For those looking to deceive the eye, a new tool is at hand. On May 23, Adobe launched a beta version of Photoshop that uses generative artificial intelligence. The latest release has a function called “generative fill” that lets the user manipulate an image through text prompts. Adobe bills the new tool as a designer’s “creative co-pilot.”
Users can type directions into a text box to alter, add, or remove elements in an image. The desktop beta version of the app is now available, with a generally available version set to roll out in the second half of 2023.
Photoshop is the first program that Adobe has “deeply integrated” with AI. Six weeks ago, the software giant unveiled Firefly, a set of generative AI models designed for its Creative Cloud suite.
These developments come at a time when fears are running high about the ability to spot manipulated images. On May 22, a fake photo of an explosion near the Pentagon circulated on social media, sowing confusion and even causing a momentary dip in the S&P 500 index.
Fake news, Twitter, and market jitters
The fake Pentagon image was shared by several verified Twitter accounts, including one called @BloombergFeed, which isn’t associated with Bloomberg News (the account has since been suspended). Wall Street news account @zerohedge, which has more than 1.6 million followers, tweeted and deleted the image, as did Russian state media outlet RT.
The image was debunked within an hour, with the Arlington County Fire Department tweeting that no explosion or incident had taken place. But netizens and investors had already been spooked.
The rapid spread of disinformation on Twitter, which scrapped its old verification system in March, showed the potential for fake images on the platform to wreak havoc.
Adobe is promoting Content Credentials as a solution to fake images
In November 2019, Adobe announced it was partnering with Twitter and the New York Times Co. to create the Content Authenticity Initiative. Its aim: "developing an industry standard for digital content attribution."
Since then, Adobe has developed so-called Content Credentials, which it describes as “nutrition labels” attached to images. Each tag is intended to contain a tamper-proof record of an image’s provenance and history of changes. The feature is still in development.