
Governments not keen on pushing citizen-facing AI services, for obvious reasons


Gartner says governments remain wary of AI-enabled citizen-facing services, while cybersecurity experts at NordVPN have warned against getting too chatty with chatbots.

Chatbots are commonplace nowadays, to the point where sport can be had in persuading the poor things to spout nonsense that their operators would prefer they didn’t. DPD’s infamous customer support chatbot gone wild is a case in point.

Yet while commercial entities might be charging headlong into customer-facing AI-powered experiences, researchers at Gartner have reported that by 2027, fewer than 25 percent of government organizations will have citizen-facing services powered by the technology.

The reasons behind governments’ reluctance to move AI from the back office to the front are varied. Dean Lacheca, VP analyst at Gartner, said: “A lack of empathy in service delivery and a failure to meet community expectations will undermine public acceptance of GenAI’s use in citizen-facing services.”

It should be noted, though, that humans working in government services can demonstrate a lack of empathy and a failure to meet expectations just as readily as any generative AI service.

Lacheca said that while governments have benefited from using more mature technology for years, “risk and uncertainty are slowing GenAI’s adoption at scale, especially the lack of traditional controls to mitigate drift and hallucinations.”

This means that it’s easier to focus on internal processes rather than risk a chatbot talking trash as a citizen-facing service. According to Gartner, human-centered design is essential.

Just not too human, perhaps. According to a recent Infobip survey highlighted by cybersecurity experts at NordVPN, some users overshare with chatbots, divulging confidential information either to trigger a response or to create an imagined connection with the AI.

The survey reports that almost 20 percent of Americans have flirted with a chatbot, although nearly half of those insisted they were only poking the service to see what it would come out with.

Adrianus Warmenhoven, a cybersecurity expert at NordVPN, said: “Customer support operators used to be a filter, understanding the domain and privacy risks and asking only for relevant and less sensitive information.”

According to Gartner, governments that use generative AI-enabled citizen-facing services risk violating data privacy regulations or providing misleading or inaccurate results.

“GenAI adoption by government organizations should move at a pace that is aligned to their risk appetite to ensure that early missteps in the use of AI don’t undermine community acceptance of the technology in government service delivery,” said Lacheca. ®


