
Forums - General - Sora Is Shutting Down?

Google is ironically the good guy when it comes to AI use cases, in my opinion. OpenAI should take a page from their book and reallocate some of their Sora compute to stuff like this.

https://www.nature.com/articles/s41586-025-10014-0

Last edited by sc94597 - 2 days ago

sc94597 said:

Google is ironically the good guy when it comes to AI use cases, in my opinion. OpenAI should take a page from their book and reallocate some of their Sora compute to stuff like this.

https://www.nature.com/articles/s41586-025-10014-0

Or better yet, no more further development of AI.



CaptainExplosion said:
sc94597 said:

Google is ironically the good guy when it comes to AI use cases, in my opinion. OpenAI should take a page from their book and reallocate some of their Sora compute to stuff like this.

https://www.nature.com/articles/s41586-025-10014-0

Or better yet, no more further development of AI.

Nah. "Reduced use of animals" in health-care experiments is a big win, in my opinion. Reducing the tons of suffering inflicted on sentient mammalian and avian life is ethical social progress.



sc94597 said:
CaptainExplosion said:

Or better yet, no more further development of AI.

Nah. "Reduced use of animals" in health-care experiments is a big win, in my opinion. Reducing the tons of suffering inflicted on sentient mammalian and avian life is ethical social progress.

How sentient they are is still up in the air.

And do you see the irony of saying AI is helping animals when data centres are using water and releasing harmful chemicals into the atmosphere?



CaptainExplosion said:
sc94597 said:

Google is ironically the good guy when it comes to AI use cases, in my opinion. OpenAI should take a page from their book and reallocate some of their Sora compute to stuff like this.

https://www.nature.com/articles/s41586-025-10014-0

Or better yet, no more further development of AI.

That's a completely unrealistic idea, so repeating it over and over again is a waste of time. Aside from that, it's also harmful, because you're advocating for medical advancements to slow down, and thus for people to keep suffering and dying from diseases like cancer for longer than necessary - or, in this case, for animals to suffer more. You should at least try approaching this subject with some nuance and advocate for regulation instead of nonsensical things that will never happen. Responding to objectively fantastic applications of the technology this way is just absurd.



CaptainExplosion said:
sc94597 said:

Nah. "Reduced use of animals" in health-care experiments is a big win, in my opinion. Reducing the tons of suffering inflicted on sentient mammalian and avian life is ethical social progress.

How sentient they are is still up in the air.

And do you see the irony of saying AI is helping animals when data centres are using water and releasing harmful chemicals into the atmosphere?

No it isn't. The overwhelming majority of mammalian and avian life is sentient - "being able to feel, perceive, and experience sensations subjectively, such as pain, pleasure, and emotion." Anyone whose ethics go beyond "what is good for me and mine?" can see that.



Norion said:
CaptainExplosion said:

Or better yet, no more further development of AI.

That's a completely unrealistic idea, so repeating it over and over again is a waste of time. Aside from that, it's also harmful, because you're advocating for medical advancements to slow down, and thus for people to keep suffering and dying from diseases like cancer for longer than necessary - or, in this case, for animals to suffer more. You should at least try approaching this subject with some nuance and advocate for regulation instead of nonsensical things that will never happen. Responding to objectively fantastic applications of the technology this way is just absurd.

sc94597 said:
CaptainExplosion said:

How sentient they are is still up in the air.

And do you see the irony of saying AI is helping animals when data centres are using water and releasing harmful chemicals into the atmosphere?

No it isn't. The overwhelming majority of mammalian and avian life is sentient - "being able to feel, perceive, and experience sensations subjectively, such as pain, pleasure, and emotion." Anyone whose ethics go beyond "what is good for me and mine?" can see that.

You missed what I said about data center emissions and water consumption. How is that helping animals?

And if animals are truly sentient, what are you gonna say is ok next, bestiality?



CaptainExplosion said:
sc94597 said:

No it isn't. The overwhelming majority of mammalian and avian life is sentient - "being able to feel, perceive, and experience sensations subjectively, such as pain, pleasure, and emotion." Anyone whose ethics go beyond "what is good for me and mine?" can see that.

You missed what I said about data center emissions and water consumption. How is that helping animals?

And if animals are truly sentient, what are you gonna say is ok next, bestiality?

No I didn't miss what you said. I just didn't let you derail from the fact that testing on animals is being reduced.

Animals (at least those with central nervous systems) feel pain and experience the world subjectively. You seem to be mixing up sentience, the capacity to experience the world subjectively, with sapience, which is having human-like self-awareness and judgement. The bestiality comment is fucking ridiculous and you know it.

As for the data center issue:

1. The bulk of data centers are for cloud and big-data services, not AI (though that might change in the future).
2. Most of the ecological costs of these data centers are incurred up-front, because the systems are fairly closed once built.
3. Advances in computing, like photonics, other analog computing technologies, and more efficient algorithms, will make this a short-term rather than a medium-term issue.

Meanwhile, the reduction and eventual elimination of animal testing is a long-term benefit.



sc94597 said:
CaptainExplosion said:

You missed what I said about data center emissions and water consumption. How is that helping animals?

And if animals are truly sentient, what are you gonna say is ok next, bestiality?

No I didn't miss what you said. I just didn't let you derail from the fact that testing on animals is being reduced.

Animals (at least those with central nervous systems) feel pain and experience the world subjectively. You seem to be mixing up sentience, the capacity to experience the world subjectively, with sapience, which is having human-like self-awareness and judgement. The bestiality comment is fucking ridiculous and you know it.

As for the data center issue:

1. The bulk of data centers are for cloud and big-data services, not AI (though that might change in the future).
2. Most of the ecological costs of these data centers are incurred up-front, because the systems are fairly closed once built.
3. Advances in computing, like photonics, other analog computing technologies, and more efficient algorithms, will make this a short-term rather than a medium-term issue.

Meanwhile, the reduction and eventual elimination of animal testing is a long-term benefit.

Doesn't take away from the problems of AI NOW. Elon and other AI bro boneheads like him have been poisoning small towns with their data centres, figuratively AND literally.



CaptainExplosion said:

Doesn't take away from the problems of AI NOW. Elon and other AI bro boneheads like him have been poisoning small towns with their data centres, figuratively AND literally.

So in the example you provided, the affected area is in a state whose population has historically voted for few to no environmental regulations, and which already had pollution problems before the data centers came in, since, well, it industrialized. The majority (white) population has voted this way because they know these environmental harms will mostly affect working-class, non-white Americans in segregated areas. But the harms pre-date AI by centuries, and the solution must encompass more than just addressing AI. It's a political and social problem.

"Construction underway for billionaire Elon Musk’s xAI supercomputer is in a predominantly Black Memphis neighborhood already affected negatively by environmental pollution. (Photo: Karen Pulfer Focht for Tennessee Lookout)"

In other jurisdictions, where environmental regulations are far stricter, these investments are typically (and often required to be) coupled with carbon-neutral or carbon-negative initiatives (1, 2).

And these competitors are out-competing Elon, because they have what his company doesn't: high-quality talent, whereas he has indentured servants who work for him for their H-1B visas.

Since we were talking about an AI produced by Google that is helping both humans and animals, let's also note that Google has been pushing for efficient compute for a while. For example, only a few days ago they released a paper on how to reduce the memory consumption of current systems by about six times, something they could have guarded and kept as a competitive edge.

There is also the matter that "AI is making Renewable Energy smarter," as noted by the Renewable Energy Institute.

"AI enhances every stage of the value chain, optimising forecasting of solar and wind output, enabling predictive maintenance to minimise downtime and enabling smart grid management to balance supply and demand efficiently"


Last edited by sc94597 - 2 days ago