On Feb. 15, OpenAI, the company behind ChatGPT and a leading force in the advancement of artificial intelligence, unveiled Sora, a groundbreaking new AI-powered platform capable of generating video clips up to one minute long from a user’s text prompt.
So far, Sora has not been released to the public; access has been limited to select individuals and organizations that OpenAI hopes will find and troubleshoot vulnerabilities ahead of its eventual public release, a prospect that has many feeling excited, nervous or both.
AI-generated images from platforms like DALL-E have already raised alarms for those worried about misinformation, art theft and other malicious uses of generative AI, such as a person’s likeness being used to sexually exploit them. (For example, AI-generated nude and sexually explicit images of Taylor Swift quickly went viral before social media platforms took them down.)
Legislative officials should take proper measures to provide oversight of these technologies as they advance in order to mitigate harm and unpredictable damage.
As AI becomes more prevalent in media, many creatives will be forced to adapt, either learning to use Sora (among other new technologies and innovations) or falling behind.
This is not the first time technological advancement has shaken up the world and forced people to adapt; consider the invention of the automobile or of air travel. Fear of advancement will not prevent the friction of transition, but adequate preparation and legislative oversight at every stage can certainly make that transition smoother.
AI’s underlying problem is not its innovation but the ethics of its use. The ethics of AI have no clear solution or timeline, yet industry leaders like Microsoft, OpenAI and Google show no signs of slowing down anytime soon.