AI in government: from experiment to essential public service engine

PwC Luxembourg | 9:23 am, 5th March

As governments across Europe move from experimenting with AI to embedding it at scale, the question is no longer whether artificial intelligence will transform public services, but how to do so responsibly, securely, and with real societal impact. In this interview, Giovanna Galasso, Partner, Industry & Public Sector at PwC Luxembourg, shares concrete examples, strategic insights, and a forward-looking vision of how AI and generative AI are already reshaping government operations.


In the context of Government and Public Services, where do you see AI and generative AI already delivering tangible efficiency gains today, and which use cases are proving the most impactful so far?

The most immediate and tangible efficiency gains come from automating knowledge-intensive tasks across the public sector. Governments are already using AI to classify and summarise documents, draft responses, and power intelligent assistants that help civil servants access accurate information instantly.

AI-powered chatbots are also widely deployed to respond to citizen enquiries, guide users through administrative procedures, and improve speed and accuracy in service delivery. Concrete examples already exist across Europe. In Estonia, for instance, Bürokratt is a nationwide interoperable network of chatbots and AI agents that allows citizens to access public services through a single conversational interface. In Vienna, the WienKI platform supports more than 70,000 municipal employees by providing fast, reliable answers to operational questions and assisting with learning, creativity and problem solving. In Luxembourg, the Government IT Centre (CTIE) has demonstrated AI use cases that support translation, knowledge search, and software development while maintaining human oversight.


What are the key considerations and challenges specific to implementing AI and GenAI in the public sector — particularly in terms of governance, data sensitivity, regulation, and public trust?

In the public sector, AI must be compliant by design. With the EU AI Act already in force, public institutions are legally required to embed risk management, transparency, and AI literacy from the outset. This is not merely an ethical exercise; it directly impacts how solutions are procured, designed, deployed, and monitored.

Citizen data is extremely sensitive, which means governments must strictly limit data collection to what is necessary, use it only for its intended purpose, and retain full control within national and EU legal frameworks. Luxembourg has been particularly proactive, investing in sovereign and isolated cloud environments and high-performance computing infrastructure to ensure data sovereignty without compromising performance.

Trust and transparency are equally critical. Citizens must know when AI is involved in public decisions. This implies clear communication, proper labelling of AI-generated content, publication of impact assessments, and continuous monitoring of bias, accuracy, and model quality. A further challenge is the informal use of generative AI by public servants, which makes AI literacy, clear internal policies, and shared standards essential to ensure responsible and consistent usage across administrations.


From your perspective, what distinguishes Luxembourg in its approach to AI and GenAI compared to other countries or regions?

Luxembourg stands out by treating AI as a strategic national project rather than a collection of isolated experiments. The country has aligned its data, AI, and quantum strategies under a single digital sovereignty vision, providing public institutions with a clear long-term framework for responsible innovation.

A key differentiator is Luxembourg’s sovereign digital infrastructure, which combines the MeluXina supercomputer with fully sovereign, government-operated cloud environments. This enables advanced AI use while retaining full control over data, jurisdiction, and compliance.

Luxembourg also benefits from a strong GovTech and open innovation culture, actively bringing together startups, researchers, and public institutions to enable rapid pilots and learning cycles. Its focus on inclusion, multilingualism, and open data further reinforces this distinctive approach.


How should governments and public institutions measure the success and return on investment of AI initiatives, especially when outcomes are not purely financial?

In government, success must be measured beyond traditional financial ROI. Public institutions exist to create societal value. The real question is whether AI makes public services faster, fairer, more accessible, and more trustworthy.

Key indicators include reduced waiting times, lower backlogs, improved accuracy, and higher first-contact resolution rates. Quality, equity, and transparency are equally important, as is ensuring AI does not introduce bias. Workforce impact is another critical dimension: successful AI initiatives free civil servants from repetitive tasks, allowing them to focus on higher-value activities such as case analysis, citizen interaction, and policy development. Together, these dimensions provide a much more meaningful picture of value creation.


If we look a few years ahead, what role do you believe AI and generative AI will play in reshaping how governments and public services operate?

AI has the potential to transform governments into more proactive, personalised, and accessible systems. Policy design and monitoring will become faster and more evidence-driven, while service delivery will increasingly shift from reactive processing to anticipating citizen needs.

Multilingual digital assistants will enable clearer and more human-centred interactions. Behind the scenes, AI will streamline analysis, coding, and casework, allowing civil servants to focus on judgement, empathy, and decision making. If implemented responsibly, AI can help governments deliver services that are faster, fairer, and more citizen-centric without compromising trust.


As a Partner at PwC Luxembourg, how are you leveraging AI and GenAI internally, and how do you ensure initiatives remain value driven and scalable?

At PwC Luxembourg, we use AI to work smarter, not harder. AI agents help teams search knowledge faster, draft and review content consistently, and automate routine tasks, freeing time for higher-value client work.

These initiatives are supported by strong governance. Every use case is risk-assessed, human oversight is embedded, and AI literacy is promoted at all levels of the organisation. Importantly, we only scale what proves real value: measurable efficiency gains, improved quality, and deployment models that respect data sensitivity. This disciplined approach allows us to move from experimentation to responsible, value-driven operationalisation.


