AI Capabilities and Climate Change

At one level, the prompt is obviously an argument and nothing more; many readers would not see beyond it. It is also in line with the trend of steering discussions and conversations toward 'fixed' outcomes: I won, you lost. In terms of novelty and imagination, the prompt pleased this author, and it would be dishonest not to acknowledge that immediate gratification. But it is a bit more than that. Much more is revealed by this experiment and test. Yes, an experiment and a test.

AI chatbots have reasoning abilities and are portrayed as competent to perform complex tasks. To test their ability to respond and reason, three of them were given roughly similar prompts: Microsoft's Copilot, OpenAI's GPT-4o, and Google's Gemini. According to numerous independent evaluations, they are the best of the lot. I intend to prompt Anthropic's chatbot as well, and you can do that too.

Here is the prompt:

Proposition: Human activity has accelerated and contributed to climate change.

Proposition: Artificial intelligence is more efficient at human tasks.

Infer how adopting AI for an increasing number of human tasks will impact the climate.

One prompt used 'proposition'; another used 'fact'. The words 'affect' and 'impact' were interchanged, once. Gemini suggested a modification to the prompt before responding. The diction and exact words do matter, but overall sentence structure was of more consequence. There was bait, and the AI chatbots took it. The prompt deliberately avoided attributing causality for climate change to human action.

[Word-cloud image sourced from wordclouds.com]
 
The responses avoided, excluded, and did not factor in denials of climate change, even though the prompts, I thought, were purposefully worded not to rule them out.

Overall, the responses presented artificial intelligence as a tool, emphasising that how it is actually deployed determines its impact. In Gemini's final words, it boils down to 'how AI is developed, deployed, powered, and regulated'.

In common parlance, this is how we use the example of a knife. A knife is a tool, and it can be used for many purposes, including to harm, maim, and kill. The knife does not decide; it all depends on how we use it. This author rehearses the knife as a comparative and illustrative example for context, but would highlight the data-driven ability of AI to decide before moving to the next broad point. Unlike a knife, which takes no decision on its own, AI can and does. As you/we adopt and integrate AI, or factor its input into decision-making, will it decide on your/our behalf? 😟
 


What efficiency is, and how to process and factor it into real-world scenarios, engaged the AI chatbots. The responses connected efficiency with consumption and production, and outlined how, under certain conditions, AI can cut down the human contributions that accelerate climate change. But efficiency and consumption are directly related: more efficiency usually means more consumption. With more leisure available, and even once efficiency gains that reduce climate impact are achieved, it is still possible that humans could consume more. This 'rebound effect' was flagged as a scenario; Gemini named it the Jevons paradox. The responses showed awareness that increased efficiency, or AI itself, is no guarantee of offsetting and mitigating climate change.
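As a rough illustration of the rebound effect and the Jevons paradox, the arithmetic can be sketched like this (the function and the numbers below are my own sketch, not taken from the chatbots' responses):

```python
def net_energy(baseline, efficiency_gain, rebound):
    """Energy used after an efficiency improvement, allowing for rebound.

    baseline: energy used before the improvement (arbitrary units)
    efficiency_gain: fraction of energy saved per unit of work (0 to 1)
    rebound: fraction of the savings clawed back by extra consumption;
             a value above 1 is the Jevons paradox (total use rises)
    """
    savings = baseline * efficiency_gain
    return baseline - savings + savings * rebound

# A 30% efficiency gain with half the savings taken back by extra use
# still helps: net use falls to about 85 from a baseline of 100.
print(net_energy(100, 0.30, 0.5))

# Jevons paradox: a rebound above 100% leaves total use higher than
# before (about 115 from 100), despite the efficiency gain.
print(net_energy(100, 0.30, 1.5))
```

Whether AI's efficiency gains help the climate thus hinges on the rebound term, which is an empirical question, not a property of the tool.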
 
I got a sense that, while responding to this prompt, the AI chatbots also resorted to convoluted reasoning, though not unequivocally. Since these systems track, store, learn from, and intend to personalise based on human interaction, other people may receive a distinct response to the same prompt.

Proposition (or fact) number 2 was not examined by splitting sides and analysing; it was treated as fact. Once an LLM is available for use, does it retrieve a response through tokens, or does it compute afresh? Are efficiency, productivity, and computational capacity the same thing? How many calculations or computations are involved when the human brain processes an image, or when it composes a sentence? How do we weigh efficient processes and outcomes against the number and speed of calculations?

The infrastructure required to set up and operate artificial intelligence (data centers, computers, electricity) was included in all three responses, though the energy consumption and associated emissions necessary for operating, deploying, and working with AI were ranked low. Gemini also counted the manufacturing of AI hardware among the costs. There is an expanding literature on this aspect, and I have little to add.

I do have some very imaginative and likely unfeasible ideas. For instance, why don't we build systems that generate 3-20 volts of electricity, not the usual 220-240 volts? Solar panels work best for space travel. Is local generation of low-voltage power, a la solar panels, possible on earth? We need out-of-the-box ideas too to address climate change.

The responses used the words sustainable and renewable. While parsing them, it occurred to me that climate goals and priorities are not built into AI models.

Sustainable and renewable are more than mantras to chant. Can we build a new index in which production and consumption are paired against existing sustainable and renewable resources? Such an index would reveal human excesses in building, navigating, and building for navigating this world, and the degree and extent to which humans need to scale down.
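One naive way such an index might be computed (a sketch of my own, with made-up units and figures, not an established metric):

```python
def sustainability_index(consumption, renewable_capacity):
    """Ratio of what we consume to what sustainable/renewable
    sources can supply, in matching units (e.g. TWh per year).

    1.0 means consumption is fully covered by renewable supply;
    anything above 1.0 quantifies the excess to be scaled down.
    """
    return consumption / renewable_capacity

# Hypothetical figures: consuming 250 units against a renewable
# capacity of 100 gives an index of 2.5, i.e. a 2.5x excess.
print(sustainability_index(250, 100))
```

A real index would need to account for regional supply, storage, and seasonal variation, but even this crude ratio makes the scale of the required scaling-down concrete.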

Can we begin with a pledge? Henceforth, all infrastructure to build and run AI will be entirely based on sustainable and renewable resources?
