Designing FSM Specifications from Requirements with GPT 4.0
Abstract: Finite state machines (FSMs) are executable formal specifications of reactive systems. These machines are designed based on systems' requirements, which are often recorded in textual documents written in natural language. FSMs play a crucial role in different phases of model-driven system engineering (MDE); for example, they serve to automate testing activities. FSM quality is critical: the lower the quality of an FSM, the higher the number of faults surviving the testing phase and the higher the risk of failure of the system in production, which could lead to catastrophic scenarios. Therefore, this paper leverages recent advances in the domain of LLMs to propose an LLM-based framework for designing FSMs from requirements. The framework also suggests an expert-centric approach based on FSM mutation and test generation for repairing the FSMs produced by LLMs. This paper also provides an experimental analysis and evaluation of LLMs' capacities in performing the tasks presented in the framework, as well as FSM repair via various methods. The paper presents experimental results with simulated data. These results and methods bring a new analysis and vision of LLMs that are useful for further development of machine learning technology and its applications to MDE.
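To make the abstract's core ideas concrete, here is a minimal illustrative sketch (not taken from the paper; the turnstile example, state names, and helper function are hypothetical) of an FSM as a transition table, a single-transfer-fault mutant of the kind an LLM-generated FSM might contain, and a test sequence that distinguishes the mutant from the specification:

```python
# Illustrative sketch only: an FSM given as {(state, input): (next_state, output)},
# a "transfer fault" mutant, and a distinguishing test. All names are hypothetical.

def run_fsm(transitions, start, inputs):
    """Execute a Mealy-style FSM, returning the final state and output sequence."""
    state, outputs = start, []
    for symbol in inputs:
        state, out = transitions[(state, symbol)]
        outputs.append(out)
    return state, outputs

# A toy turnstile FSM derived from a textual requirement such as:
# "A coin unlocks the turnstile; pushing through locks it again."
spec = {
    ("locked", "coin"): ("unlocked", "unlock"),
    ("locked", "push"): ("locked", "alarm"),
    ("unlocked", "coin"): ("unlocked", "thanks"),
    ("unlocked", "push"): ("locked", "lock"),
}

# A mutant with one transfer fault: after "push" in "unlocked" it stays
# unlocked instead of locking (the kind of error FSM repair must detect).
mutant = dict(spec)
mutant[("unlocked", "push")] = ("unlocked", "lock")

# A test sequence that "kills" the mutant: the two machines' outputs diverge.
test = ["coin", "push", "push"]
_, expected = run_fsm(spec, "locked", test)    # ['unlock', 'lock', 'alarm']
_, observed = run_fsm(mutant, "locked", test)  # ['unlock', 'lock', 'lock']
print(expected != observed)  # True: this test distinguishes spec from mutant
```

In mutation-based testing, a test suite is judged by how many such mutants it kills; the framework described in the abstract uses this idea, with an expert in the loop, to localize and repair faults in LLM-produced FSMs.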
Subjects:
Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Formal Languages and Automata Theory (cs.FL)
Cite as: arXiv:2603.29140 [cs.SE]
(or arXiv:2603.29140v1 [cs.SE] for this version)
https://doi.org/10.48550/arXiv.2603.29140
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Omer Nguena Timo [view email] [v1] Tue, 31 Mar 2026 01:42:25 UTC (265 KB)