Knowledge database development by large language models for countermeasures against viruses and marine toxins
arXiv:2603.29149v1 Announce Type: new
Abstract: Access to the most up-to-date information on medical countermeasures is important for the research and development of effective treatments for viruses and marine toxins. However, there is a lack of comprehensive databases curating data on viruses and marine toxins, making decisions on medical countermeasures slow and difficult. In this work, we employ two large language models (LLMs), ChatGPT and Grok, to design two comprehensive databases of therapeutic countermeasures for five viruses (Lassa, Marburg, Ebola, Nipah, and Venezuelan equine encephalitis) as well as for marine toxins. With high-level human-provided inputs, the two LLMs identify public databases containing data on the five viruses and marine toxins, collect relevant information from these databases and the literature, iteratively cross-validate the collected information, and design interactive webpages for easy access to the curated, comprehensive databases. Notably, ChatGPT is used to design agentic AI workflows (consisting of two AI agents, one for research and one for decision-making) that rank the countermeasures for viruses and marine toxins in the databases. Together, our work explores the potential of LLMs as a scalable, updatable approach to building comprehensive knowledge databases and supporting evidence-based decision-making.
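The abstract's two-agent workflow (a research agent that gathers evidence per countermeasure, followed by a decision agent that ranks candidates) can be sketched as below. This is a minimal illustration, not the authors' implementation: the `Countermeasure` fields, the placeholder evidence values, and the weighted scoring heuristic are all hypothetical stand-ins for what would, in the paper's pipeline, be LLM calls against curated database content.

```python
from dataclasses import dataclass, field

@dataclass
class Countermeasure:
    name: str
    target: str                      # e.g. "Lassa", "Ebola", or a marine toxin
    evidence: dict = field(default_factory=dict)

def research_agent(cm: Countermeasure) -> Countermeasure:
    """Stand-in for an LLM research agent that collects evidence from public
    databases and the literature (here, hard-coded placeholder values)."""
    cm.evidence = {
        # Illustrative values only; the real workflow would curate these.
        "clinical_stage": {"ribavirin": 3, "favipiravir": 2}.get(cm.name, 1),
        "in_vitro_efficacy": 0.8,
        "sources_cross_validated": True,
    }
    return cm

def decision_agent(candidates: list[Countermeasure]) -> list[Countermeasure]:
    """Stand-in for an LLM decision-making agent that ranks candidates;
    here a simple weighted score over the research agent's evidence."""
    def score(cm: Countermeasure) -> float:
        ev = cm.evidence
        return (2.0 * ev["clinical_stage"]
                + ev["in_vitro_efficacy"]
                + (1.0 if ev["sources_cross_validated"] else 0.0))
    return sorted(candidates, key=score, reverse=True)

candidates = [Countermeasure("favipiravir", "Lassa"),
              Countermeasure("ribavirin", "Lassa")]
ranked = decision_agent([research_agent(c) for c in candidates])
print([c.name for c in ranked])
```

The two-stage split mirrors the paper's separation of concerns: evidence gathering and ranking stay independent, so either agent's prompt (or scoring criteria) can be updated without touching the other.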
Comments: Clearance: 26-T-0967 (DOW)
Subjects: Artificial Intelligence (cs.AI); Databases (cs.DB)
Report number: LA-UR-26-22203
Cite as: arXiv:2603.29149 [cs.AI]
(or arXiv:2603.29149v1 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2603.29149
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Hung Do [v1] Tue, 31 Mar 2026 01:55:31 UTC (1,245 KB)

