Simon Willison's Weblog

Simon Willison's Weblog Supports Webmention

Quoting Willy Tarreau

On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.

And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.

Willy Tarreau, lead developer of HAProxy

Tags: security, linux, generative-ai, ai, llms, ai-security-research

Quoting Daniel Stenberg

The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

I'm spending hours per day on this now. It's intense.

Daniel Stenberg, lead developer of cURL

Tags: daniel-stenberg, security, curl, generative-ai, ai, llms, ai-security-research

Quoting Greg Kroah-Hartman

Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.

Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

Greg Kroah-Hartman, Linux kernel maintainer, in conversation with Steven J. Vaughan-Nichols

Tags: security, linux, generative-ai, ai, llms, ai-security-research

Can JavaScript Escape a CSP Meta Tag Inside an Iframe?

Research: Can JavaScript Escape a CSP Meta Tag Inside an Iframe?

In trying to build my own version of Claude Artifacts I got curious about options for applying CSP headers to content in sandboxed iframes without using a separate domain to host the files. Turns out you can inject <meta http-equiv="Content-Security-Policy"...> tags at the top of the iframe content and they'll be obeyed even if subsequent untrusted JavaScript tries to manipulate them.

Tags: iframes, security, javascript, content-security-policy, sandboxing

The Axios supply chain attack used individually targeted social engineering

The Axios team have published a full postmortem on the supply chain attack that resulted in a malicious dependency going out in a release the other day. It involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saaym...

Highlights from my conversation about agentic engineering on Lenny's Podcast

I was a guest on Lenny Rachitsky's podcast, in a new episode titled An AI state of the union: We've passed the inflection point, dark factories are coming, and automation timelines. It's available on YouTube, Spotify, and Apple Podcasts. Here are my highlights from our conve...

Gemma 4: Byte for byte, the most capable open models

Four new vision-capable Apache 2.0 licensed reasoning LLMs from Google DeepMind, sized at 2B, 4B, and 31B, plus a 26B-A4B Mixture-of-Experts. Google emphasize "unprecedented level of intelligence-per-parameter", providing yet ...

llm-gemini 0.30

Release: llm-gemini 0.30

New models gemini-3.1-flash-lite-preview, gemma-4-26b-a4b-it and gemma-4-31b-it. See my notes on Gemma 4.

Tags: gemini, llm, gemma

March 2026 sponsors-only newsletter

I just sent the March edition of my sponsors-only monthly newsletter. If you are a sponsor (or if you start a sponsorship now) you can access it here. In this month's newsletter:

  • More agentic engineering patterns
  • Streaming experts with MoE models on a Mac
  • Model releases in March
  • Vibe porting
  • Supply chain attacks against PyPI and NPM
  • Stuff I shipped
  • What I'm using, March 2026 edition
  • And a couple of museums

Here's a copy of the February newsletter as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!

Tags: newsletter

datasette-llm 0.1a6

Release: datasette-llm 0.1a6

  • The same model ID no longer needs to be repeated in both the default model and allowed models lists - setting it as a default model automatically adds it to the allowed models list. #6
  • Improved documentation for Python API usage.
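The first change amounts to a simple merge, which can be sketched in a few lines. This is an illustration of the described behavior only; the function and parameter names here are hypothetical, not the plugin's actual API:

```python
def effective_allowed_models(default_model, allowed_models):
    """Model the merge: a configured default model is implicitly
    appended to the allowed models list if not already present."""
    allowed = list(allowed_models)
    if default_model and default_model not in allowed:
        allowed.append(default_model)
    return allowed

# The default no longer needs to be repeated in the allowed list:
print(effective_allowed_models("gpt-4.1-mini", ["claude-sonnet"]))
```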

Tags: llm, datasette

datasette-enrichments-llm 0.2a1

Release: datasette-enrichments-llm 0.2a1

  • The actor who triggers an enrichment is now passed to the llm.mode(... actor=actor) method. #3

Tags: enrichments, llm, datasette

datasette-extract 0.3a0

Release: datasette-extract 0.3a0

  • This plugin now uses datasette-llm to configure and manage models. This means it's possible to specify which models should be made available for enrichments, using the new enrichments purpose.

Tags: llm, datasette

datasette-enrichments-llm 0.2a0

Release: datasette-enrichments-llm 0.2a0

  • This plugin now uses datasette-llm to configure and manage models. This means it's possible to specify which models should be made available for enrichments, using the new enrichments purpose.

Tags: llm, datasette

datasette-llm-usage 0.2a0

Release: datasette-llm-usage 0.2a0

  • Removed features relating to allowances and estimated pricing. These are now the domain of datasette-llm-accountant.
  • Now depends on datasette-llm for model configuration. #3
  • Full prompts, responses, and tool calls can now be logged to the llm_usage_prompt_log table in the internal database if you set the new datasette-llm-usage.log_prompts plugin configuration setting.
  • Redesigned the /-/llm-usage-simple-prompt page, which now requires the llm-usage-simple-prompt permission.

Tags: llm, datasette

datasette-llm 0.1a5

Release: datasette-llm 0.1a5

  • The llm_prompt_context() plugin hook wrapper mechanism now tracks prompts executed within a chain as well as one-off prompts, which means it can be used to track tool call loops. #5

Tags: llm, datasette

Quoting Soohoon Choi

I want to argue that AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long-term.

Soohoon Choi, Slop Is Not Necessarily The Future

Tags: slop, ai-assisted-programming, generative-ai, agentic-engineering, ai, llms

Supply Chain Attack on Axios Pulls Malicious Dependency from npm

Useful writeup of today's supply chain attack against Axios, the HTTP client NPM package with 101 million weekly downloads. Versions 1.14.1 and 0.30.4 both included a new dependency called plain-crypto-js which was freshly published malware, stealing credentials and installing a remote access trojan (RAT).

It looks like the attack came from a leaked long-lived npm token. Axios have an open issue to adopt trusted publishing, which would ensure that only their GitHub Actions workflows are able to publish to npm. The malware packages were published without an accompanying GitHub release, which strikes me as a useful heuristic for spotting potentially malicious releases - the same pattern was present for LiteLLM last week as well.
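That heuristic is straightforward to automate. A minimal sketch of just the comparison step, assuming you have already fetched the version list from the npm registry and the tag names from the GitHub releases API (the `v`-prefix normalization is an assumption about common tagging conventions):

```python
def suspicious_versions(npm_versions, github_release_tags):
    """Flag published npm versions with no matching GitHub release.

    npm_versions: version strings from the npm registry (e.g. the
        keys of the "versions" object for a package).
    github_release_tags: tag names from the GitHub releases API,
        assumed to be tagged "v1.2.3" or "1.2.3".
    """
    released = {tag.lstrip("v") for tag in github_release_tags}
    return [v for v in npm_versions if v not in released]

# The two compromised versions mentioned above would stand out:
print(suspicious_versions(
    ["1.14.0", "1.14.1", "0.30.4"],
    ["v1.14.0"],
))
```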

Via lobste.rs

Tags: javascript, security, npm, supply-chain

datasette-llm 0.1a4

Release: datasette-llm 0.1a4

I released llm-echo 0.3 to provide an API key testing utility I needed for the tests for this new feature.

Tags: llm, datasette

llm-all-models-async 0.1

Release: llm-all-models-async 0.1

LLM plugins can define new models in both sync and async varieties. The async variants are most common for API-backed models - sync variants tend to be things that run the model directly within the plugin.

My llm-mrchatterbox plugin is sync only. I wanted to try it out with various Datasette LLM features (specifically datasette-enrichments-llm) but Datasette can only use async models.

So... I had Claude spin up this plugin that turns sync models into async models using a thread pool. This ended up needing an extra plugin hook mechanism in LLM itself, which I shipped just now in LLM 0.30.
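The core trick can be sketched with asyncio.to_thread alone. This is a minimal illustration of wrapping a blocking prompt() call in a coroutine via the default thread pool, not the actual plugin code (which registers real LLM model classes through the plugin hooks):

```python
import asyncio


class SyncModel:
    """Stand-in for a synchronous LLM plugin model."""
    model_id = "mrchatterbox"

    def prompt(self, text):
        # Imagine a blocking call into the model here.
        return f"echo: {text}"


class AsyncWrapper:
    """Expose a sync model's prompt() as an async coroutine.

    asyncio.to_thread() runs the blocking call in the default
    thread pool, so the event loop stays responsive.
    """
    def __init__(self, sync_model):
        self.sync_model = sync_model
        self.model_id = sync_model.model_id

    async def prompt(self, text):
        return await asyncio.to_thread(self.sync_model.prompt, text)


async def main():
    model = AsyncWrapper(SyncModel())
    return await model.prompt("hello")


print(asyncio.run(main()))  # → echo: hello
```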

Tags: llm, async, python

llm 0.30

Release: llm 0.30

  • The register_models() plugin hook now takes an optional model_aliases parameter listing all of the models, async models and aliases that have been registered so far by other plugins. A plugin with @hookimpl(trylast=True) can use this to take previously registered models into account. #1389
  • Added docstrings to public classes and methods and included those directly in the documentation.

Tags: llm

llm-echo 0.4

Release: llm-echo 0.4

  • Prompts now have the input_tokens and output_tokens fields populated on the response.

Tags: llm

llm-echo 0.3

Release: llm-echo 0.3

Tags: llm

datasette-files 0.1a3

Release: datasette-files 0.1a3

I'm working on integrating datasette-files into other plugins, such as datasette-extract. This necessitated a new release of the base plugin.

  • owners_can_edit and owners_can_delete configuration options, plus the files-edit and files-delete actions are now scoped to a new FileResource which is a child of FileSourceResource. #18
  • The file picker UI is now available as a <datasette-file-picker> Web Component. Thanks, Alex Garcia. #19
  • New from datasette_files import get_file Python API for other plugins that need to access file data. #20

Tags: datasette

datasette-llm 0.1a3

Release: datasette-llm 0.1a3

Adds the ability to configure which LLMs are available for which purpose, which means you can restrict the list of models that can be used with a specific plugin. #3

Tags: llm, datasette

Quoting Georgi Gerganov

Note that the main issues that people currently unknowingly face with local models mostly revolve around the harness and some intricacies around model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that atm are not only fragile but are also developed by different parties. So it's difficult to consolidate the entire stack and you have to keep in mind that what you are currently observing is with very high probability still broken in some subtle way along that chain.

Georgi Gerganov, explaining why it's hard to find local models that work well with coding agents

Tags: coding-agents, generative-ai, ai, local-llms, llms, georgi-gerganov

Mr. Chatterbox is a (weak) Victorian-era ethically trained model you can run on your own computer

Trip Venturella released Mr. Chatterbox, a language model trained entirely on out-of-copyright text from the British Library. Here's how he describes it: Mr. Chatterbox is a language model trained entirely from scratch on a corpus of over 28,000 Victorian-era British texts ...

llm-mrchatterbox 0.1

Release: llm-mrchatterbox 0.1

See Mr. Chatterbox is a (weak) Victorian-era ethically trained model you can run on your own computer.

Tags: llm

Pretext

Exciting new browser library from Cheng Lou, previously a React core developer and the original creator of the react-motion animation library. Pretext solves the problem of calculating the height of a paragraph of line-wrapped text without touching the DOM. The usual...

Pretext — Under the Hood

Tool: Pretext — Under the Hood

See my notes on Pretext here.

Python Vulnerability Lookup

Tool: Python Vulnerability Lookup

I learned that the OSV.dev open source vulnerability database has an open CORS JSON API, so I had Claude Code build this HTML tool for pasting in a pyproject.toml or requirements.txt file (or the name of a GitHub repo containing those) and seeing a list of all reported vulnerabilities from that API.
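The underlying API call is simple: OSV.dev accepts one POST per package/version pair. A minimal sketch of just the payload construction for exact `==` pins in a requirements.txt file — only a fraction of the input formats the tool handles:

```python
import json
import re

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def osv_payload(requirement):
    """Build an OSV.dev query payload from a pinned requirement line.

    Only handles the simple "name==version" form; returns None for
    anything else (comments, version ranges, editable installs, ...).
    """
    match = re.match(r"^\s*([A-Za-z0-9._-]+)\s*==\s*([\w.]+)", requirement)
    if not match:
        return None
    name, version = match.groups()
    return {
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }


print(json.dumps(osv_payload("requests==2.31.0")))
```

A client would POST that JSON to the query endpoint and read the vulns array from the response.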

Tags: tools, python, supply-chain, vibe-coding, security