March 9, 2025

🔗 cursor-large-projects

Most of the Twitter conversation about Cursor has been about how it’s a great tool for small projects. That’s a shame, because it’s also a great tool for large projects.

The linked guide argues that Cursor is a powerful tool for maintaining large coding projects, claiming it helps developers code 5-30 times faster. It emphasizes the importance of an effective edit-and-test loop for improving code quality. By setting up proper documentation and workflows, engineers can leverage AI for refactoring, documentation, and project planning.

I love the ai folder specific to the project. I also appreciate how you can have Cursor run tests, which isn’t something I’ve done much of yet, but I look forward to trying it.

🔗

January 31, 2025

Eval datasets and frameworks survey

The rapid pace of model development means everyone’s on a never-ending quest to figure out if the latest model is actually better than its predecessor. Public benchmarks are essential, but they usually only paint part of the picture. By rolling your own evaluations, you get a direct view of how a model handles tasks that matter to your team—like domain-specific question-answering, custom code generation, or weird edge cases unique to your product.

This first part of a hopefully ongoing series is a survey of popular evaluation datasets, with a quick description of each.

Evaluation Datasets

  1. TruthfulQA – Tests how well a model avoids repeating human falsehoods. Comes in generative and multiple-choice variants. Great for checking whether your model parrots misinformation.
  2. Lab Bench – A robust, biology-focused dataset with 30 subtasks like protocol troubleshooting and sequence manipulation. Perfect if you’re dealing with scientific research workflows.
  3. SWE-bench – Focuses on real GitHub Issues. Ideal if your team wants to evaluate code quality, debugging capabilities, or how well a model handles real-world developer workflows.
  4. RE-Bench – Specifically probes AI’s R&D capabilities in a controlled environment, letting you compare model performance against human benchmarks.
  5. GPQA – Graduate-level multiple-choice questions from actual PhD students. This is great if you’re dealing with advanced scientific or technical reasoning tasks that require real depth.
  6. Frontier Math, GSM8K, MATH, and DeepMind Mathematics – For math-savvy teams, these are gold. They test everything from grade-school arithmetic to high-level theorem solving.
  7. HellaSwag, WinoGrande, and MMLU Benchmark – If you want to test common-sense reasoning, logic, or broader knowledge capabilities, these cover a wide range.
  8. ARC (Abstraction and Reasoning Corpus) – Good for puzzles that test a model’s ability to identify patterns without explicit instructions.
  9. PopQA – Useful for stress-testing how well a model retains or “forgets” entity-specific information over multiple turns in conversation.
  10. HumanEval, BigCodeBench – If you need to see how your model handles code generation or code QA.
  11. IfEval-OOD, HREF, BigBenchHard, DROP – More specialized sets that target out-of-distribution reasoning, reading comprehension, or advanced multi-step logic.

Evaluation Frameworks

  1. OLMES – A new tool that simplifies loading, running, and reporting benchmarks on your model.
  2. lm-evaluation-harness by EleutherAI – One of the most established frameworks, supporting a ton of datasets and easy to customize for your own data.

I’m sure I’m missing some. If you know of any, please let me know. For now, I think this puts you in a good position to start clicking around and researching which of these datasets are most relevant to your use case. From there, you can use one of the evaluation frameworks to run through the curated dataset.
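As a rough sketch of what that last step can look like with EleutherAI’s lm-evaluation-harness (the model name and task names below are placeholders; swap in whatever matters to your team):

pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=Qwen/Qwen2.5-7B-Instruct \
    --tasks gsm8k,truthfulqa_mc2 \
    --batch_size 8

The harness ships task definitions for many of the datasets above, so much of the work is mapping a dataset to its task name rather than writing an evaluation from scratch.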

🔗

January 20, 2025

Running Deepseek R1 Locally

If you want to see a model “think” on your local machine, here is a quickstart:

  1. Download a distilled version of the model: ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
  2. Chat with the model there OR if you have the LLM CLI tool installed and the llm-ollama plugin (llm install --upgrade llm-ollama), you can run:
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
        'Flagship Pioneering is a Venture Creation company that creates companies. Come up with 5 new company ideas for them'
    

I’ve found the outputs to be mediocre. It’s a distilled model, after all, and a fairer comparison would be with their public API; still, it’s fascinating to see the model “think” on your local machine.

🔗

January 14, 2025

🔗 o1-not-a-chat-model

There has been a lot of chat about the new reasoning models, and how they are not chat models. I completely agree, and want to add my voice to the chorus.

The main point is that an o1 prompt should look a lot different from your typical chat prompt:

O1 Prompt Structure

Understanding this prompt anatomy is crucial because it fundamentally changes how organizations need to approach AI implementation. While the structure might make sense to technical users, rolling this out across an enterprise presents unique challenges. The shift from quick chat-style interactions to detailed, structured briefs impacts everything from user training to workflow design.

The enterprise challenge with o1 isn’t just about adopting new tech - it’s about fundamentally changing how people work with AI. While chat models let users dive in with quick questions and iterate, o1 demands what I’d call “front-loaded effort” - you need to dump ALL the context upfront and carefully frame what you want.

This creates an interesting tension for enterprise adoption. On the upside, o1’s report-style outputs actually align really well with enterprise needs. You get structured, thorough analysis that reads like a proper business document rather than a casual chat. Perfect for decision-making and documentation.

But here’s the catch - teaching busy enterprise users to write these detailed briefs is tough. They’re used to the quick back-and-forth of chat models or traditional tools. Now we’re asking them to:

  • Front-load all context (which means gathering it first)
  • Clearly define outputs (no vague requests)
  • Wait longer for responses (potentially 5+ minutes)

For enterprise rollouts, I think this means:

  1. Training needs to shift from “how to chat with AI” to “how to brief AI”
  2. Expectations around response times need resetting (not usually a big deal)
  3. Best practices around context gathering need development

The real kicker? Just when enterprises were getting comfortable with chat-based AI, this paradigm shift forces another round of change management. It’s like teaching someone a new language right after they got comfortable with the first one.

To make this work, enterprises might need dedicated “AI prompt engineers” - people who can bridge the gap between users and these more demanding but powerful models. Think of them as technical writers for the AI age. If not dedicated people, then companies could consider dedicated projects and engagements focused on bringing reasoning prompts to business users.

Additionally, it’d be helpful to start sharing the art of the possible for business users with reasoning models like o1. Let me share four practical examples where business users could leverage reasoning models effectively:

Quarterly Report Analysis: Instead of asking quick questions about numbers, dump the entire quarterly spreadsheet, previous reports, and industry context into the model and ask for a comprehensive analysis. The model can identify trends, flag concerns, and create executive summaries - all in one thorough shot. Much better than piecemeal analysis through chat.

Meeting Summary & Action Plans: Take a raw meeting transcript, add the project background, team structure, and goals, then ask the model to create a structured output with: key decisions, action items, risks identified, and next steps. The model’s ability to process all this context at once means better synthesis than parsing piece by piece.
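As a rough sketch of what a front-loaded brief for the meeting-summary case might look like from the terminal, here’s one way to do it with the llm CLI and an OpenAI key configured (the file names are hypothetical and the model id may differ in your setup):

llm -m o1-preview "Project background:
$(cat project_background.md)

Team structure and goals:
$(cat team_and_goals.md)

Raw meeting transcript:
$(cat meeting_transcript.txt)

Produce a structured report with four sections: key decisions, action items with owners, risks identified, and next steps."

The point is that everything the model needs is packed into the single brief; there’s no back-and-forth afterward.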

Policy Compliance Review: Perfect for legal or HR teams. Feed in your company policies, industry regulations, and a proposed new process or policy. The model can do a thorough gap analysis, identifying compliance risks and suggesting specific updates. Much more reliable than trying to check compliance point-by-point through chat. Plus, the model’s formal report style matches the serious nature of compliance work.

RFP Response Analysis: For procurement or sales teams, dump in the entire RFP document, your company’s past proposals, competitor intel, and pricing strategy. Ask for a detailed analysis of what sections need focus, suggested win themes, pricing recommendations, and potential red flags. The model’s ability to process all this context at once helps create a cohesive strategy instead of answering one requirement at a time.

The key theme here? These aren’t quick Q&A tasks - they’re meaty problems where the user invests time upfront to get comprehensive, actionable insights in return. Think “weekly deep-dive” rather than “quick daily check.”

Bottom line: Treat o1 like the powerful reasoning engine it is - with proper training and support - and it’ll transform your business. Treat it like ChatGPT, and you will likely struggle with user frustration and poor results.

January 11, 2025

🔗 Agents

I came across Chip Huyen’s blog via Twitter today. Her blog is great and has a lot of content I’m going to read. I’ll start with this post on Agents.

Chip defines Agents as:

An agent is anything that can perceive its environment and act upon that environment. Artificial Intelligence: A Modern Approach (1995) defines an agent as anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.⁠

She then explains that the interaction between an agent and the environment is the key to the agent’s success. If you build a web scraping agent, the internet is the environment. If you build an agent that can play Minecraft, the Minecraft world is the environment. And so on.

Planning is the heart of an agent, and in the blog, Chip provides a lot of detail on it. Planning is decoupled from execution. The plan requires that the model understands the user’s intent and breaks the task down into smaller subtasks. You can do a lot to improve the likelihood of success - asking for human feedback, writing great system prompts, giving better descriptions of the tools available to the agent, using a stronger model, or even fine-tuning a model.

To me it seems that with the current state of LLMs, the most effective way to improve the planning aspect of an agent is to ask for human feedback. There may be verticals, like coding or math, where the plan is easier for an LLM to generate, but for longer tasks, I think asking for human feedback is the best way to improve the agent. I wonder if we’ll ever see the reasoning models (o1, o3, etc.) allow users to adjust the plan on the fly. I know that Google’s Gemini 1.5 Deep Research already has a “refine” feature that allows you to adjust the plan.

The post is great, and the blog looks promising. I’ll be back to Chip’s writing!

January 8, 2025

Running Ollama Vision Models Locally

When I was checking out the new Phi model with Ollama, I came across the llm-ollama plugin. It reminded me that Ollama now supports vision models. One of the use cases I’ve been thinking about is a script that can run against all images in my Downloads folder and generate a description of each image. Many of them have obscure names, and I’d like to be able to search them by description.

Here is my attempt to do this. The script is as simple as:

for file in ~/Downloads/*.{jpg,jpeg,png,gif,webp}; do
    echo "Processing $file"
    llm -m llama3.2-vision:11b "Describe this image." -a "$file"
done

To run this, you’ll need to install LLM and Ollama. Then you’ll need to install the llama3.2-vision model:

ollama pull llama3.2-vision:11b

and install the plugin:

llm install --upgrade llm-ollama

This is remarkably simple and produces really helpful descriptions.

Ollama Vision Local

The image shows a man running across the finish line of a triathlon, with a red archway and a crowd of people watching. The purpose of the image is to capture the moment of triumph for the athlete as he completes his race.

* A man:
        + Wearing a black triathlon suit
        + Running towards the camera
        + Has a number on his bib
        + Appears to be exhausted but determined
* A finish line:
        + Red archway with white text that reads "FINISH"
        + Surrounded by spectators and officials
        + Located at the end of a road or track
* A crowd of people:
        + Standing behind barriers, watching the athlete cross the finish line
        + Cheering and taking photos
        + Dressed in casual clothing, with some wearing team jerseys

The image conveys a sense of excitement and accomplishment, as the athlete reaches the end of his grueling triathlon. The crowd's enthusiasm adds to the celebratory atmosphere, making it clear that this is a momentous occasion for all involved.

It can obviously be modified to run against any file or folder that you want. I could imagine doing some intelligent document processing on top of this to help group similar files together and clean up my Downloads folder.
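For example, here’s a minimal variant (just a sketch; the output file name is arbitrary) that saves each description next to its filename so the folder becomes searchable with grep:

for file in ~/Downloads/*.{jpg,jpeg,png,gif,webp}; do
    echo "## $file" >> ~/Downloads/image-descriptions.md
    llm -m llama3.2-vision:11b "Describe this image." -a "$file" >> ~/Downloads/image-descriptions.md
done

From there, grepping image-descriptions.md (or feeding it back into an LLM) is enough to find an image by what’s in it rather than by its name.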

Lastly, if you prefer a web interface for more ad hoc usage, you can use Open WebUI and uvx to run it locally:

uvx --python 3.11 open-webui serve

This will automatically find the Ollama models you have, including your image model.

January 7, 2025

IT Department to HR Department

The IT department of every company is going to be the HR department of AI agents in the future.

Jensen Huang, CEO of NVIDIA @ CES 2025

I love this quote from Jensen. It is aligned with my view that the bar and the output for software engineering are rapidly increasing, and that the role of every software engineer will become more managerial over time.

January 5, 2025

AI Git Message

Ever find yourself staring at your terminal, struggling to write the perfect git commit message? I recently added a game-changing alias to my zshrc that leverages the power of LLMs to generate commit messages automatically. Using Simon Willison’s llm CLI tool – a Swiss Army knife for interacting with language models directly from your terminal – this alias pipes your git diff through an AI model to generate concise, descriptive commit messages. It’s like having a pair programmer who’s really good at documentation sitting right next to you. Implementation is as simple as adding a single line to your zshrc, and just like that, you’ve automated one of development’s most tedious tasks. While it might feel a bit like cheating, I’ve found the generated messages are often more consistent and informative than my manual ones, especially during those late-night coding sessions.

alias gitmessage='git diff --staged | llm "Below is a diff of all staged changes, coming from the command:
\`\`\`
git diff --staged
\`\`\`
Please generate a concise, one-line commit message for these changes."'
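After reloading your shell, a typical flow looks something like this (a sketch - review or tweak the suggestion before committing):

git add .
gitmessage            # prints a suggested one-line commit message
git commit -m "<paste or edit the suggestion here>"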
🔗

December 28, 2024

🔗 Practical Text to SQL from LinkedIn

A great article on how to use LLMs to generate SQL queries from natural language. Albert Chen’s deep dive into LinkedIn’s SQL Bot offers a fascinating glimpse into the marriage of generative AI with enterprise-scale data analytics. This multi-agent system, integrated into LinkedIn’s DARWIN platform, exemplifies how cutting-edge AI can democratize access to data insights while enhancing efficiency across teams.

Key Takeaways:

Empowering Data Democratization: SQL Bot addresses a classic bottleneck: dependency on data teams for insights. By enabling non-technical users to autonomously query databases using natural language, LinkedIn has transformed a time-intensive process into a streamlined, scalable solution.

Data Cleaning and Annotation:

we initiated a dataset certification effort to collect comprehensive descriptions for hundreds of important tables. Domain experts identified key tables within their areas and provided mandatory table descriptions and optional field descriptions. These descriptions were augmented with AI-generated annotations based on existing documentation and Slack discussions, further enhancing our ability to retrieve the right tables and use them properly in queries

  • They infer the tables that a user cares about based on their org chart.
  • Metadata and Knowledge Graphs as Pillars: Comprehensive dataset certification, enriched with AI-generated annotations, ensures accurate table retrieval despite LinkedIn’s vast table inventory (in the millions!). By combining domain knowledge, query logs, and example queries into a knowledge graph, SQL Bot builds a robust contextual foundation for query generation.
  • LLM-Driven Iterative Query Refinement: Leveraging LangChain and LangGraph, the system iteratively plans and constructs SQL queries. Validators and self-correction agents ensure outputs are precise, efficient, and error-free, highlighting the sophistication of LinkedIn’s text-to-SQL pipeline.

Personalized and Guided User Experiences: With features like quick replies, rich display elements, and a guided query-writing process, SQL Bot prioritizes user understanding and engagement. Its integration with DARWIN, complete with saved chat history and custom instructions, amplifies its accessibility and adoption.

Benchmarking and Continuous Improvement: They place a strong emphasis on benchmarking, using human evaluation alongside LLM-as-a-judge methods to build a scalable approach to query assessment and model improvement.

Reflections: Many text-to-SQL solutions stumble when handling many tables; LinkedIn’s SQL Bot thrives by leveraging metadata, personalized retrieval, and user-friendly design. It’s also impressive how the system respects permissions, ensuring data security without sacrificing convenience.

Moreover, the survey results—95% user satisfaction with query accuracy—highlight the system’s impact. This balance of technical innovation and user-centric design offers a blueprint for organizations looking to replicate LinkedIn’s success.

Why It Matters: I think there are enough details in this article to either add new features to your existing text-to-SQL bot or to build your own. LinkedIn’s work on SQL Bot is a testament to the power of AI in reshaping how we interact with complex data systems. It’s an inspiring read for engineers, data scientists, and AI enthusiasts aiming to make SQL data more accessible.
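If you’re starting from scratch, the bare-bones version of the idea is just pairing your schema with a natural-language question in a single prompt. Here’s a rough sketch with the llm CLI (schema.sql and the question are placeholders; LinkedIn’s system obviously layers retrieval, validation, and agents on top of this):

llm "Here is our table schema:
$(cat schema.sql)

Write a SQL query to answer: which job postings received the most applications last week?
Return only the SQL."

Everything the article describes - metadata retrieval, knowledge graphs, validators - is about making this basic loop reliable at the scale of millions of tables.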

🔗

December 27, 2024

🔗 Cognitive load importance

A living article on measuring cognitive load by Artem Zakirullin. It’s being continuously updated, but one clip that resonates with me is:

Once you onboard new people on your project, try to measure the amount of confusion they have (pair programming may help). If they’re confused for more than ~40 minutes in a row - you’ve got things to improve in your code.

With AI tooling, the amount of code is bound to grow. We should make sure to be cognizant of the cognitive load required to maintain it all.

🔗