▲Chrome's New Embedding Model: Smaller, Faster, Same Qualitydejan.ai
28 points by kaycebasques 8 hours ago | 4 comments
jbellis 5 hours ago [-]
TIL that Chrome ships an internal embedding model, interesting!

It's a shame that it's not open source; it's unlikely there's anything super proprietary in an embedding model that's optimized to run on CPU.

(I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)

vessenes 4 hours ago [-]
Agreed! On open source, though: can't you just pull the model and use the weights? I confess I have no idea what the licensing would be for an open-source-backed browser deploying the weights, but unless you made a huge amount of money off it, it seems like it would be unproblematic, and even then it could be just fine.
darepublic 3 hours ago [-]
> Yes – Chromium now ships a tiny on‑device sentence‑embedding model, but it’s strictly an internal feature.
>
> What it’s for: “History Embeddings.” Since ~M‑128 the browser can turn every page‑visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface “answer” chips. The whole thing is gated behind two experiments:

^ response from chatgpt
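The semantic-history-search idea quoted above can be sketched in a few lines: embed each page title and the query as vectors, then rank history entries by cosine similarity. This is only a toy illustration; a real implementation would use a learned sentence-embedding model (MiniLM, or Chrome's internal model), and the hypothetical bag-of-words `embed` below merely stands in so the example is self-contained.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Hypothetical stand-in for a sentence-embedding model:
    a sparse bag-of-words vector keyed by lowercased tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_history(query: str, titles: list[str], k: int = 2) -> list[str]:
    # Rank history entries by similarity to the query, return top k.
    q = embed(query)
    ranked = sorted(titles, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:k]

history = [
    "Recipe for sourdough bread starter",
    "Chrome history embedding model design",
    "How to train a sentence embedding model",
]
print(search_history("embedding model for browser history", history))
# → ['Chrome history embedding model design',
#    'How to train a sentence embedding model']
```

With a real embedding model the vectors would be dense floats and the lookup would go through an approximate-nearest-neighbor index rather than a linear scan, but the ranking step is the same.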

pants2 3 hours ago [-]
What does Chrome use embeddings for?