Most of the Twitter conversation about Cursor has been about how great it is for small projects. I think that’s a shame, because it’s also a great tool for large projects.
Cursor is a powerful tool for maintaining large coding projects; the guide claims it helps developers code 5-30 times faster. It emphasizes the importance of an effective edit-and-test loop to improve code quality. By setting up proper documentation and workflows, engineers can leverage AI for refactoring, documentation, and project planning.
I love the ai folder specific to the project. I also appreciate how you can have Cursor run tests, which isn’t something I’ve done much of yet, but I look forward to trying it.
The rapid pace of model development means everyone’s on a never-ending quest to figure out if the latest model is actually better than its predecessor. Public benchmarks are essential, but they usually only paint part of the picture. By rolling your own evaluations, you get a direct view of how a model handles tasks that matter to your team—like domain-specific question-answering, custom code generation, or weird edge cases unique to your product.
This first part of a hopefully ongoing series surveys popular evaluation datasets, with a quick description of each.
Evaluation Datasets
TruthfulQA – Tests how well a model avoids repeating human falsehoods. Comes in generative and multiple-choice variants. Great for checking whether your model parrots misinformation.
Lab Bench – A robust, biology-focused dataset with 30 subtasks like protocol troubleshooting and sequence manipulation. Perfect if you’re dealing with scientific research workflows.
SWE-bench – Focuses on real GitHub Issues. Ideal if your team wants to evaluate code quality, debugging capabilities, or how well a model handles real-world developer workflows.
RE-Bench – Specifically probes AI’s R&D capabilities in a controlled environment, letting you compare model performance against human benchmarks.
GPQA – Graduate-level multiple-choice questions from actual PhD students. This is great if you’re dealing with advanced scientific or technical reasoning tasks that require real depth.
FrontierMath, GSM8K, MATH, and DeepMind Mathematics – For math-savvy teams, these are gold. They test everything from grade-school arithmetic to research-level problem solving.
HellaSwag, WinoGrande, and MMLU – If you want to test common-sense reasoning, logic, or broader knowledge capabilities, these cover a wide range.
ARC (Abstraction and Reasoning Corpus) – Good for puzzles that test a model’s ability to identify patterns without explicit instructions.
PopQA – Useful for stress-testing how well a model recalls long-tail, entity-specific factual knowledge.
HumanEval, BigCodeBench – If you need to see how your model handles code generation or code QA.
IFEval-OOD, HREF, BigBenchHard, DROP – More specialized sets that target out-of-distribution instruction following, reading comprehension, or advanced multi-step reasoning.
Evaluation Frameworks
OLMES – A new tool that simplifies loading, running, and reporting benchmarks on your model.
lm-evaluation-harness by EleutherAI – One of the most established frameworks, supporting a ton of datasets and easy to customize for your own data.
I’m sure I’m missing some. If you know of any, please let me know. For now, I think this puts you in a good position to start clicking around and researching which of these datasets are most relevant to your use case. From there, you can use one of the evaluation frameworks to run through the curated dataset.
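As a rough sketch of what that last step can look like with lm-evaluation-harness (the model and tasks below are placeholders; swap in whatever matches your use case):
# Placeholder Hugging Face model and tasks; pick whichever datasets fit your needs
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct \
  --tasks gsm8k,hellaswag \
  --batch_size 8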
If you want to see a model “think” on your local machine, here is a quickstart:
Download a distilled version of the model: ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
Chat with the model there OR if you have the LLM CLI tool installed and the llm-ollama plugin (llm install --upgrade llm-ollama), you can run:
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
'Flagship Pioneering is a Venture Creation company that creates companies. Come up with 5 new company ideas for them'
I’ve found the outputs to be mediocre. It’s a distilled model, of course, and a better comparison would be with their public API; still, it’s fascinating to see the model “think” on your local machine.
There has been a lot of chat about the new reasoning models, and how they are not chat models. I completely agree, and want to add my voice to the chorus.
The focus here is that an o1 prompt should look a lot different from your typical chat prompt: front-load all of the context, clearly define the goal, and spell out the output you want.
Understanding this prompt anatomy is crucial because it fundamentally changes how organizations need to approach AI implementation. While the structure might make sense to technical users, rolling this out across an enterprise presents unique challenges. The shift from quick chat-style interactions to detailed, structured briefs impacts everything from user training to workflow design.
The enterprise challenge with o1 isn’t just about adopting new tech - it’s about fundamentally changing how people work with AI. While chat models let users dive in with quick questions and iterate, o1 demands what I’d call “front-loaded effort” - you need to dump ALL the context upfront and carefully frame what you want.
This creates an interesting tension for enterprise adoption:
On the upside, o1’s report-style outputs actually align really well with enterprise needs. You get structured, thorough analysis that reads like a proper business document rather than a casual chat. Perfect for decision-making and documentation.
But here’s the catch - teaching busy enterprise users to write these detailed briefs is tough. They’re used to the quick back-and-forth of chat models or traditional tools. Now we’re asking them to:
Front-load all context (which means gathering it first)
Clearly define outputs (no vague requests)
Wait longer for responses (potentially 5+ minutes)
For enterprise rollouts, I think this means:
Training needs to shift from “how to chat with AI” to “how to brief AI”
Expectations around response times need resetting (not usually a big deal)
Best practices around context gathering need development
The real kicker? Just when enterprises were getting comfortable with chat-based AI, this paradigm shift forces another round of change management. It’s like teaching someone a new language right after they got comfortable with the first one.
To make this work, enterprises might need dedicated “AI prompt engineers” - people who can bridge the gap between users and these more demanding but powerful models. Think of them as technical writers for the AI age. If not dedicated people, then companies could consider dedicated projects and engagements focused on bringing reasoning prompts to business users.
Additionally, it’d be helpful to start sharing the art of the possible for business users with reasoning models like o1. Let me share four practical examples where business users could leverage reasoning models effectively:
Quarterly Report Analysis: Instead of asking quick questions about numbers, dump the entire quarterly spreadsheet, previous reports, and industry context into the model and ask for a comprehensive analysis. The model can identify trends, flag concerns, and create executive summaries - all in one thorough shot. Much better than piecemeal analysis through chat.
Meeting Summary & Action Plans: Take a raw meeting transcript, add the project background, team structure, and goals, then ask the model to create a structured output with: key decisions, action items, risks identified, and next steps. The model’s ability to process all this context at once means better synthesis than parsing piece by piece.
Policy Compliance Review: Perfect for legal or HR teams. Feed in your company policies, industry regulations, and a proposed new process or policy. The model can do a thorough gap analysis, identifying compliance risks and suggesting specific updates. Much more reliable than trying to check compliance point-by-point through chat. Plus, the model’s formal report style matches the serious nature of compliance work.
RFP Response Analysis: For procurement or sales teams, dump in the entire RFP document, your company’s past proposals, competitor intel, and pricing strategy. Ask for a detailed analysis of what sections need focus, suggested win themes, pricing recommendations, and potential red flags. The model’s ability to process all this context at once helps create a cohesive strategy instead of answering one requirement at a time.
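To make the “brief, not chat” shape concrete, here’s a rough sketch of the meeting-summary example as a single front-loaded prompt, run through the llm CLI. The model name, project details, and section labels are all placeholders rather than a prescribed format:
llm -m o1 "$(cat <<'EOF'
CONTEXT
Project: Q3 customer portal redesign (hypothetical)
Team: 1 PM, 3 engineers, 1 designer; goal is to ship by end of Q3
Transcript: <paste the full raw meeting transcript here>

TASK
Produce a structured report with four sections: key decisions, action items with owners, risks identified, and next steps.

OUTPUT
A formal report suitable for circulating to stakeholders, not a chat-style reply.
EOF
)"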
The key theme here? These aren’t quick Q&A tasks - they’re meaty problems where the user invests time upfront to get comprehensive, actionable insights in return. Think “weekly deep-dive” rather than “quick daily check.”
Bottom line: Treat o1 like the powerful reasoning engine it is - with proper training and support - and it’ll transform your business. Treat it like ChatGPT, and you will likely struggle with user frustration and poor results.
I came across Chip Huyen’s blog via Twitter today. Her blog is great and has a lot of content I’m going to read. I’ll start with this post on Agents.
Chip defines Agents as:
An agent is anything that can perceive its environment and act upon that environment. Artificial Intelligence: A Modern Approach (1995) defines an agent as anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
She then explains that the interaction between an agent and the environment is the key to the agent’s success. If you build a web scraping agent, the internet is the environment. If you build an agent that can play Minecraft, the Minecraft world is the environment. And so on.
Planning is the heart of an agent, and in the blog, Chip provides a lot of detail on it. Planning is decoupled from execution. The plan requires that the model understands the user’s intent and breaks the task down into smaller subtasks. You can do a lot to help improve the likelihood of success - asking for human feedback, writing great system prompts, giving better descriptions of the tools available to the agent, using a stronger model, or even fine-tuning a model.
To me it seems that, with the current state of LLMs, the most effective way to improve the planning aspect of an agent is to ask for human feedback. There may be verticals, like coding or math, where the plan is easier for an LLM to generate, but for longer tasks, I think asking for human feedback is the best way to improve the agent. I wonder if we’ll ever see the reasoning models (o1, o3, etc.) allow users to adjust the plan on the fly. I know that Google’s Gemini 1.5 Deep Research already has a “refine” feature that lets you adjust the plan.
The post is great, and the blog looks promising. I’ll be back to Chip’s writing!
When I was checking out the new Phi model with Ollama, I came across the LLM Ollama plugin. It reminded me that Ollama now supports vision models. One of the use cases I’ve been thinking about is a script that can run against all images in my Downloads folder and generate a description of each image. Many of them have obscure names, and I’d like to be able to search them by description.
Here is my attempt to do this. The script is as simple as:
for file in ~/Downloads/*.{jpg,jpeg,png,gif,webp}; do
  echo "Processing $file"
  llm -m llama3.2-vision:11b "Describe this image." -a "$file"
done
To run this, you’ll need to install LLM and Ollama. Then you’ll need to install the llama3.2-vision model:
ollama pull llama3.2-vision:11b
and install the plugin:
llm install --upgrade llm-ollama
This is remarkably simple and produces really helpful descriptions.
The image shows a man running across the finish line of a triathlon, with a red archway and a crowd of people watching. The purpose of the image is to capture the moment of triumph for the athlete as he completes his race.
* A man:
+ Wearing a black triathlon suit
+ Running towards the camera
+ Has a number on his bib
+ Appears to be exhausted but determined
* A finish line:
+ Red archway with white text that reads "FINISH"
+ Surrounded by spectators and officials
+ Located at the end of a road or track
* A crowd of people:
+ Standing behind barriers, watching the athlete cross the finish line
+ Cheering and taking photos
+ Dressed in casual clothing, with some wearing team jerseys
The image conveys a sense of excitement and accomplishment, as the athlete reaches the end of his grueling triathlon. The crowd's enthusiasm adds to the celebratory atmosphere, making it clear that this is a momentous occasion for all involved.
It can obviously be modified to run against any file or folder that you want. I could imagine doing some intelligent document processing on this to help group similar files together and cleaning up my downloads folder.
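As a small sketch of that idea, here’s a variant of the loop that writes each description to a simple index file you can then search by keyword (the one-sentence prompt and the index path are arbitrary choices):
for file in ~/Downloads/*.{jpg,jpeg,png,gif,webp}; do
  # Flatten each description to one line so the index stays tab-separated
  desc=$(llm -m llama3.2-vision:11b "Describe this image in one sentence." -a "$file" | tr '\n' ' ')
  printf '%s\t%s\n' "$file" "$desc" >> ~/Downloads/image-index.tsv
done
# Later, search your images by description
grep -i "finish line" ~/Downloads/image-index.tsv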
Lastly, if you prefer a web interface for more ad-hoc usage, you can use Open WebUI and uvx to run it locally:
uvx --python 3.11 open-webui serve
This will automatically find the Ollama models you have, including your image model.
Ever find yourself staring at your terminal, struggling to write the perfect git commit message? I recently added a game-changing alias to my zshrc that leverages the power of LLMs to generate commit messages automatically. Using Simon Willison’s llm CLI tool – a Swiss Army knife for interacting with language models directly from your terminal – this alias pipes your git diff through an AI model to generate concise, descriptive commit messages. It’s like having a pair programmer who’s really good at documentation sitting right next to you. Implementation is as simple as adding a single line to your zshrc, and just like that, you’ve automated one of development’s most tedious tasks. While it might feel a bit like cheating, I’ve found the generated messages are often more consistent and informative than my manual ones, especially during those late-night coding sessions.
alias gitmessage='git diff --staged | llm "Below is a diff of all staged changes, coming from the command:
\`\`\`
git diff --staged
\`\`\`
Please generate a concise, one-line commit message for these changes."'
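One possible workflow with this alias (the direct-commit one-liner at the end is just an option, not part of the alias itself):
# Stage the changes you want described, then ask for a message
git add -p
gitmessage
# If you like the suggestion, commit with it directly
git commit -m "$(git diff --staged | llm 'Generate a concise, one-line commit message for these changes.')"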