haxxiy said:
That's how theater owners were defending themselves 95 years ago. You monster.
But the point I was making is a different one - technophobic appeals tend to have little impact on the zeitgeist, especially when there are massive economic incentives toward the adoption of a new technology. I have little doubt human-made art will find fertile ground, especially appealing to us boomers. But the younger generations, who have grown up in a world where machine learning can automate any and every form of media? I doubt it. By the way, I wouldn't be surprised if the basis for biological creativity is shallower in layers and algorithmically much simpler than something like Stable Diffusion. Us fleshy bags have to contend with the evolutionarily likely and these lossy, noisy neurons, after all. No risk of overfitting whatsoever unless we deliberately are out to plagiarize something...
True, what is 'original' art anyway but a continuation / interpretation of prior art and other sensory input? AI simply has access to vastly more prior art and recordings of everything than any human can ever experience in one lifetime.
The problem, which is really just a capitalist problem, is where to draw the line for copyright / plagiarism. It's up to the courts, I guess: like Sony suing Tencent over Light of Motiram, or Nintendo going after fan projects recreating its games / using its assets.
You can introduce 'noise' in AI recreations as well, not that that helps the debate one way or another.
Technophobic appeals are more about the loss of jobs, like the musicians in theaters. AI voices, AI actors, AI paralegals. Far more worrying are AI censorship, AI profiling, AI decision making. Who is responsible when 'the machine' did it? If one human makes a mistake, fire that person. If an AI program controlling all flights makes a mistake, err, shit, all flights are grounded until it's fixed.
Evolution thrives on diversity to overcome adversity. What will happen when one AI rules everything? Or should we have many different AIs?
Actually the biggest threat of AI right now is its immense power and water usage :/ Whether AI can become sustainable is the bigger question.
https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/
Generating images was by far the most energy- and carbon-intensive AI-based task. Generating 1,000 images with a powerful AI model, such as Stable Diffusion XL, is responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.
Hmm, I bet a human painting 1,000 images has a much bigger CO2 footprint lol. But it's a different process of course: far easier to generate 1,000 AI images than to paint one painting.
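For scale, here's a back-of-envelope conversion of that quoted figure into grams of CO2 per image. The ~400 g CO2 per mile for an average gasoline car is an outside assumption (roughly the EPA's commonly cited average), not from the article itself:

```python
# Rough CO2 per AI-generated image, from the figure quoted above:
# 1,000 SDXL images ~ 4.1 miles of driving.
GRAMS_CO2_PER_MILE = 400      # assumed average for a gasoline car (EPA-style figure)
MILES_PER_1000_IMAGES = 4.1   # from the MIT Technology Review article

total_grams = MILES_PER_1000_IMAGES * GRAMS_CO2_PER_MILE
per_image = total_grams / 1000

print(f"~{total_grams:.0f} g CO2 per 1,000 images")  # ~1640 g
print(f"~{per_image:.2f} g CO2 per image")           # ~1.64 g
```

So on the order of a gram or two of CO2 per image, if those numbers hold.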
The carbon emissions of writing and illustrating are lower for AI than for humans
https://www.nature.com/articles/s41598-024-54271-x
Yeah, no doubt: like for like, AI wins. It doesn't need to eat, sleep, etc.
But
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.
By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).
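To put that "five times a web search" claim in rough numbers: assuming Google's often-cited ~0.3 Wh per web search (an outside assumption; the MIT article gives no absolute per-query figure), and a hypothetical 1 billion queries per day:

```python
# Rough scale of the quoted "5x a web search" figure.
WH_PER_WEB_SEARCH = 0.3   # assumed, often-cited Google figure
CHATGPT_MULTIPLIER = 5    # from the MIT News article

wh_per_query = WH_PER_WEB_SEARCH * CHATGPT_MULTIPLIER
# At a hypothetical 1 billion queries/day (1e9 Wh == 1 GWh):
gwh_per_day = wh_per_query * 1e9 / 1e9

print(f"~{wh_per_query} Wh per query")                 # ~1.5 Wh
print(f"~{gwh_per_day} GWh per day at 1B queries/day") # ~1.5 GWh
```

Tiny per query, but it adds up fast at scale — which is exactly the data center problem the article describes.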
And an even bigger imminent threat is the AI bubble bursting, global recession incoming :(
Technophobic appeals might be right this time, albeit for the wrong reasons.
Can AI prevent the AI bubble bursting?
ChatGPT's take
AI might delay or soften a bubble burst if:
it continues to generate tangible value across industries,
capital becomes more rationally allocated, and
regulation encourages sustainable development.
But AI can’t prevent market psychology—fear and greed—from cycling. If expectations outrun reality, correction is inevitable.
Grok's conclusion
AI has the tools to detect and mitigate aspects of its own bubble—through forecasting, optimization, and proving tangible value—but it can't fully prevent a burst on its own. Bubbles are fundamentally driven by human psychology, speculative capital, and systemic flaws (e.g., unprofitable business models where training costs outpace revenue). A correction seems plausible in the near term, potentially in 2026-2028, but it would likely refine rather than destroy AI, leaving behind infrastructure for future growth. Ultimately, prevention relies more on balanced investments, regulation, and realistic expectations than on AI alone. If anything, over-reliance on AI without addressing these could make the burst worse.
Both agree it depends on humans being reasonable. We're doomed!