langchain-core==1.2.24
Changes since langchain-core==1.2.23
release(core): 1.2.24 (#36434)
feat(core): impute placeholder filenames for OpenAI file inputs (#36433)
chore: pygments>=2.20.0 across all packages (CVE-2026-4539) (#36385)
fix(core): add "computer" to WellKnownOpenAITools (#36261)
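The filename-imputation change (#36433) concerns file input blocks sent to OpenAI without a name attached. The sketch below illustrates the general idea with a hypothetical helper; it is not langchain-core's actual implementation, and the `.pdf` placeholder extension is an assumption:

```python
import base64


def impute_filename(file_block: dict, index: int = 0) -> dict:
    """Return a copy of an OpenAI-style file input block, adding a
    placeholder filename when none is present (hypothetical helper,
    not langchain-core's real code)."""
    block = dict(file_block)
    if not block.get("filename"):
        # Some OpenAI file-input paths expect a filename, so
        # synthesize a stable placeholder from the block's position.
        block["filename"] = f"file-{index}.pdf"
    return block


block = {"file_data": base64.b64encode(b"%PDF-1.4 ...").decode()}
print(impute_filename(block, 0)["filename"])  # file-0.pdf
```

Copying the block before mutating it keeps the caller's original dict untouched, which matters when the same message content is reused across requests.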

Intel B70 with Qwen3.5 35B
Intel recently released support for Qwen3.5: https://github.com/intel/llm-scaler/releases/tag/vllm-0.14.0-b8.1

Is anyone with a B70 willing to run a llama-benchy benchmark on the 35B model with the settings below?

uvx llama-benchy --base-url $URL --model $MODEL --depth 0 --pp 2048 --tg 512 --concurrency 1 --runs 3 --latency-mode generation --no-cache --save-total-throughput-timeseries

submitted by /u/Fmstrat
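For anyone unfamiliar with the tool, the invocation can be expanded with concrete values; the endpoint URL and model id below are hypothetical placeholders for whatever your vLLM server exposes. This sketch just assembles and prints the command so the flags can be checked before pointing it at a live server:

```shell
# Hypothetical endpoint and model id -- substitute your vLLM server's values.
URL="http://localhost:8000/v1"
MODEL="qwen3.5-35b"

# Assemble the full llama-benchy invocation from the post.
CMD="uvx llama-benchy --base-url $URL --model $MODEL \
 --depth 0 --pp 2048 --tg 512 --concurrency 1 --runs 3 \
 --latency-mode generation --no-cache --save-total-throughput-timeseries"

# Print the assembled command; run it once the server is up.
echo "$CMD"
```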
More in Open Source AI


What happened to MLX-LM? What are the alternatives?
Support seems non-existent and the last proper release was over a month ago. Compared with llama.cpp, they are miles apart in activity and support. Is there an alternative, or should I just use llama.cpp on my MacBook?

submitted by /u/Solus23451

Fine-tuned Gemma 4 E4B for structured JSON extraction from regulatory docs - 75% to 94% accuracy, notebook + 432 examples included
Gemma 4 dropped this week, so I fine-tuned E4B for a specific task: extracting structured JSON (doc type, obligations, key fields) from technical and regulatory documents.

Results on held-out test set:
- doc_type accuracy: 75% base → 94% fine-tuned
- Hallucinated obligations: 1.25/doc → 0.59/doc
- JSON validity: 100%
- Field coverage: 100%

Setup:
- QLoRA 4-bit, LoRA r=16, alpha=16, Unsloth + TRL
- 432 training examples across 8 doc types
- 5 epochs on a single L4, ~10 min training time
- Final train loss 1.04, eval loss 1.12

The whole thing is open: notebook, dataset, serve.py for FastAPI inference. https://github.com/spriyads-vault/gemma4-docparse

Some things I learned the ha…
