
Image: Jernej Furman (CC BY 2.0)

ICT

September 13, 2023

How ChatGPT Might Help the World’s Poorest and the Organizations that Work with Them

Contributor: The Conversation

ChatGPT has been touted as a tool that is going to revolutionize the workplace and the home. AI systems like it have the potential to enhance productivity but could also displace jobs. The ChatGPT website received 1.5 billion visits last month.

Though no comprehensive statistics exist, these users are likely to be relatively educated, with access to smartphones or computers. So, can the AI chatbot also benefit people who don’t have all these advantages?

We are associated with Friend in Need India Trust (FIN), a non-governmental organisation (NGO) based in an isolated fishing village named Kameswaram in Tamil Nadu state. FIN wages a daily battle against women’s lack of empowerment, pollution and a lack of functioning sanitation.

These problems and others act as key obstacles to local economic development. Recently a FIN colleague, Dr. Raja Venkataramani, returned from the US keen to discuss ChatGPT. He wondered whether the AI chatbot could help to create awareness, motivation and community engagement towards our sustainability goals in Kameswaram.


For one experiment, we worked with local women who are FIN staff members but didn’t have a high level of education. The women staff at FIN are local villagers grappling with patriarchal attitudes at home, who find it difficult to construct engaging arguments to motivate local people – especially boys and men – to conserve water, use toilets and not litter in public places.

We introduced ChatGPT to them as a tool to aid them in their lives and work. After installing it on their phones, they found it very helpful. ChatGPT acted like a companion and remembered what had been discussed previously.

FIN staff take their ocean littering campaign out into the local area. Photo: Shyama V Ramani (co-author)

One staff member wanted to use it to debate politics with her husband in the run-up to the state elections. She asked ChatGPT what was good and bad about her preferred candidate and requested that it back this up with data.

She then repeated this for that politician’s opponent. She found the responses for both candidates were equally convincing. The staff member did not have the patience to check the veracity of the arguments and so ended up even more confused. This made her reluctant to use ChatGPT again.

Sam Altman, ChatGPT’s creator, along with other tech leaders in the US are calling for regulation to contain the risks of AI hallucinations, which is when the technology generates false information that could trigger social tensions. We asked ChatGPT to produce a speech to quell a mob bent on carrying out honor killings.


Even today, India remains plagued by communal violence such as honor killings against, for example, couples who marry outside their caste or young women who seek employment outside the village. ChatGPT proved to be a very effective speech writer, producing compelling arguments against these acts.

However, those in favor of maintaining the status quo could also use the chatbot to justify their violent behavior to the community. This might happen if they were seeking to retain their status within the village, countering any efforts to encourage community members to end the practices. We found that the AI system was just as adept at producing arguments in favor of honor killings.

In a different experiment, we aimed to see how the chatbot could help NGOs promote women’s empowerment – a central mission of FIN’s – in a way that could benefit the community. We asked ChatGPT to create a speech explaining the relevance of International Women’s Day to villagers.

The speech was very impressive, but it contained factual errors on sex ratios, the abortion of fetuses outside legal limits, and women’s participation in the workforce. When ChatGPT was asked to justify the errors, it replied: “I apologize for any confusion. I provided a hypothetical statistic to illustrate the point.”

FIN staff members discussing a campaign for women’s empowerment. Photo: Shyama V Ramani (co-author)

Pollution problem

In another experiment, we wanted to address the problem of pollution from traditional festivals in India. These frequently involve firecrackers and parties, which increase the levels of air and water pollution.

Though street theatre has previously been used successfully to motivate behavioural change, neither FIN’s staff nor its mentors felt capable of writing a script. However, within three minutes of being fed the right prompt, ChatGPT came up with a skit involving young people.

It included both male and female characters, used local names and was mindful of local nuances. The FIN staff boosted the local character of the skit by inserting their own jokes into it. The short theatre piece argued that the impact on our oceans of microfibres from synthetic clothing represents a significant environmental problem that can harm livelihoods.

Asking the AI

We asked ChatGPT for its opinion on our results. It asserted that it can be a valuable tool for both economically disadvantaged people and NGOs because it provides useful information, offers emotional support and makes communication more effective.


But the chatbot avoided discussing its obvious downsides, such as making arguments based on false, incomplete or imperfect information. Just as it can help us, it can also act as a capable speech writer for those who would seek to divide or raise tensions.

For the time being, ChatGPT seems like a handy tool for well-intentioned NGOs, but not so much for the ordinary individuals that they assist. Without users having the means to monitor the ethics and truthfulness of ChatGPT’s suggestions, AI systems could become dangerous enablers of disinformation and misinformation.


About the Authors

Shyama V. Ramani is a Professorial Fellow, and Maximilian Bruder is a PhD Research Fellow, both at Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT) at the United Nations University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tags: AI, Artificial intelligence, SDGs

