Accelerate innovation: when Low-Code meets AI
Fujitsu | 11:16 am, 9th September

Innovation has long been the driving force behind progress across industries, reshaping how we work and increasing efficiency. In the past decade, significant advances in GPU processing and memory architectures have shifted artificial intelligence from academic research into real-world applications. This transition became especially evident in early 2023, when large language model (LLM)-based assistants such as ChatGPT, released in late 2022, reached mass adoption, marking a turning point in the uptake of AI technologies.
These tools introduced generative AI to the public at scale. What was once experimental is now foundational, positioning generative AI as a core pillar of the modern digital ecosystem.
But how can organizations harness this technological leap to accelerate innovation and deliver real-world impact? José Antunes Martins, Tech Lead/Architect at Fujitsu Luxembourg, explores that question.
From prompt to context engineering
A key strength of LLMs lies in their ability to produce contextually relevant responses from relatively simple prompts. This is made possible through two foundational techniques: Retrieval-Augmented Generation (RAG) and prompt engineering.
When a user asks a question, the model often needs more information than it was trained on, particularly domain-specific or up-to-date knowledge. This is where RAG plays a critical role: it uses the user's query to search external sources, such as document repositories and knowledge bases, and passes the relevant retrieved content to the model. Prompt engineering, in turn, structures the input by crafting clear and specific instructions, ensuring the model focuses on the right context and interprets information correctly.
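To make the pattern concrete, here is a minimal Python sketch of the RAG-plus-prompt-engineering flow. The `search_documents` retriever is a stub, and the OpenAI client and `gpt-4o-mini` model name are assumptions standing in for whatever search backend and LLM an organization actually uses:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_documents(query: str, k: int = 3) -> list[str]:
    """Placeholder retriever: in practice this would query a vector
    store or enterprise knowledge base and return the top-k passages."""
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def answer(question: str) -> str:
    # RAG step: fetch domain-specific context for the user's query.
    context = "\n\n".join(search_documents(question))
    # Prompt-engineering step: clear, specific instructions that scope
    # the model to the retrieved context.
    messages = [
        {"role": "system",
         "content": "Answer using only the context below. "
                    "If the context is insufficient, say so.\n\n"
                    f"Context:\n{context}"},
        {"role": "user", "content": question},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
    return response.choices[0].message.content

print(answer("What is our refund policy?"))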
Together, these techniques have formed the backbone of how AI systems reason and respond. However, they still exhibit limitations — most notably, the risk of hallucination.
To mitigate such issues, a more comprehensive approach called context engineering has emerged. This evolution goes beyond crafting a single prompt; it involves selecting and assembling the right context, managing long-term memory, and integrating dynamic feedback. By doing so, context engineering enforces relevance and improves the alignment of model outputs with domain-specific requirements.
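As an illustrative sketch of what that assembly step can involve (the class, its fields, and the character budget below are assumptions for illustration, not an established API):

```python
from dataclasses import dataclass, field

@dataclass
class ContextEngine:
    """Illustrative context assembler: combines instructions, long-term
    memory, retrieved documents, and dynamic feedback into one input."""
    instructions: str
    memory: list[str] = field(default_factory=list)    # long-term facts
    feedback: list[str] = field(default_factory=list)  # dynamic feedback
    budget_chars: int = 4000                           # crude size budget

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def build(self, question: str, retrieved: list[str]) -> list[dict]:
        # Assemble the relevant pieces, then trim to the context budget.
        parts = (["# Instructions", self.instructions]
                 + ["# Memory"] + self.memory
                 + ["# Retrieved"] + retrieved
                 + ["# Feedback"] + self.feedback)
        system = "\n".join(parts)[: self.budget_chars]
        return [{"role": "system", "content": system},
                {"role": "user", "content": question}]
```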
Open standards and interoperability
To address the growing interoperability demands of AI systems, a natural step is the adoption of open standards that enhance scalability, maintainability, and integration across AI ecosystems. In late 2024, Anthropic introduced the Model Context Protocol (MCP) — an open standard for connecting AI models to diverse data sources and tools. MCP replaces ad hoc, fragmented integrations with a consistent and extensible interface.
The protocol defines a set of core primitives that structure and enrich how models interact with their environment:
• Prompts: Templates or instructions that guide model behavior.
• Resources: Structured or unstructured data that provide additional context.
• Tools: Executable functions enabling the model to perform real-time actions or retrieve dynamic data.
While prompts and resources enhance the model's input context, tools enable two-way interaction — allowing models to call APIs, execute logic, or engage with external systems during inference.
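For example, a minimal MCP server exposing one of each primitive can be sketched with the `FastMCP` helper from the official MCP Python SDK; the exchange-rate tool, policy resource, and review prompt are invented here purely for illustration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def get_exchange_rate(base: str, quote: str) -> float:
    """Hypothetical tool: return the current FX rate for a pair."""
    rates = {("EUR", "USD"): 1.09}  # stubbed data for illustration
    return rates.get((base.upper(), quote.upper()), 1.0)

@mcp.resource("docs://policies/{name}")
def read_policy(name: str) -> str:
    """Hypothetical resource: expose a policy document as context."""
    return f"<contents of policy '{name}'>"

@mcp.prompt()
def review_prompt(topic: str) -> str:
    """Hypothetical prompt template guiding model behavior."""
    return f"Review the following topic and flag risks: {topic}"

if __name__ == "__main__":
    mcp.run()  # serves the primitives over stdio by default
```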
Although proprietary alternatives exist — such as frameworks by OpenAI and LangChain — MCP distinguishes itself through its open, vendor-agnostic design, promoting ecosystem-wide collaboration and interoperability.
The rise of agentic AI
As large language models evolve into autonomous agents, we enter the era of agentic AI: systems capable not only of responding to input but also of perceiving context, reasoning through multi-step tasks, taking actions, and adapting through feedback. These agents can independently initiate tasks, retrieve data, incorporate feedback loops, and interact naturally with users or other systems. This shift unlocks powerful new capabilities but also introduces complexity, particularly around orchestrating multiple agents in enterprise-scale environments.
To manage this, orchestration frameworks like LangGraph have emerged, supporting the design of stateful, long-running agents capable of executing complex, multi-step workflows. While not a full web stack, it integrates seamlessly with applications by exposing APIs, effectively connecting agent logic to frontend and backend systems.
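As a hedged sketch of that model, here is a two-node LangGraph workflow; the retrieval and response nodes are stubs standing in for real vector-store and LLM calls:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    question: str
    context: str
    answer: str

def retrieve(state: AgentState) -> dict:
    # Stub retrieval step; a real node would query a vector store.
    return {"context": f"<documents relevant to: {state['question']}>"}

def respond(state: AgentState) -> dict:
    # Stub generation step; a real node would call an LLM.
    return {"answer": f"Answer based on {state['context']}"}

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("respond", respond)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "respond")
graph.add_edge("respond", END)

# The compiled graph can be invoked directly or served behind an API.
app = graph.compile()
print(app.invoke({"question": "Is this transaction fraudulent?"}))
```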
Low-Code as a launchpad
In parallel, low-code platforms have evolved beyond rapid app development to become key enablers of agentic AI in enterprise environments. Platforms like OutSystems Developer Cloud (ODC) now embed generative and agentic AI capabilities directly into workflows via the Agent Workbench, allowing developers to visually configure intelligent agents that:
• Access structured and unstructured data via RAG
• Execute tool-based actions
• Maintain context across interactions
• Operate within governed, auditable workflows with human-in-the-loop
Among the many suitable use cases for the platform, fraud detection stands out — where agents can monitor transactions, identify anomalies, and escalate suspicious activity for human review. Another strong scenario is customer support, where agents manage inquiries, retrieve account information, and escalate complex issues.
To better support these use cases, the ODC platform can leverage open standards such as MCP to minimize interoperability challenges across the diverse core applications that comprise legacy infrastructure.
Fujitsu: bridging AI and Low-Code innovation
Fujitsu offers two distinct pathways to agentic AI, allowing enterprises to choose the approach that best aligns with their technical maturity and operational needs. For use cases requiring fine-grained control, LangGraph enables the development of stateful, multi-agent systems with persistent memory and advanced orchestration — ideal for complex backend integrations and highly customized workflows.
Alternatively, the OutSystems Developer Cloud (ODC), enhanced by the Agent Workbench, provides a visual, low-code environment for rapidly building enterprise-grade AI agents. These agents support RAG, access real-time data, interact with APIs, and operate within governed workflows featuring built-in observability and human oversight.
By offering either deep control with LangGraph or accelerated delivery with ODC, Fujitsu empowers organizations to adopt agentic AI at the right level of abstraction for their specific context — without forcing a one-size-fits-all approach.
With our deep expertise in AI, cloud and low-code ecosystems, Fujitsu empowers organizations to innovate and deliver AI-powered solutions at scale. Together, let’s build a smarter, faster and more impactful future.