Our Opt-Out Tuesday series was created to help you remove yourself from invasive people search sites. The opt-out process isn’t always easy, so we’ve put together detailed instructions for the biggest culprits out there. The list has been growing by the week and now includes over 30 entries.
Now let’s talk Big Tech companies. When you create an account, there are several things they can do with your information. They can sell it to third parties, use it to target you with ads or “improve products and services.” That last one’s a classic.
Adobe has been offering cloud services since 2013, giving subscribers access to its wide range of software and mobile applications. While convenient, the company is using your files to improve its artificial intelligence. The good news is that you can put a stop to it.
When you use Creative Cloud or Document Cloud, you can save images, audio, videos, documents and other files to the cloud. This provides a secure storage method and makes accessing your work from different locations and across Adobe’s suite of products easy.
There’s a caveat, however. Storing your work in the cloud means Adobe has access to it and can do what it pleases, as outlined by its terms of service:
“Our automated systems may analyze your Content and Creative Cloud Customer Fonts … using techniques such as machine learning in order to improve our Services and Software and the user experience.”
Dig deeper, and you’ll find an FAQ page explaining that Adobe uses machine learning to analyze your content to develop and improve its products and services. What a surprise.
According to Adobe, this is done for efficiency and creativity. The company cites an example: you’ll be able to organize and edit your images more quickly and accurately. And another: automatically enhancing certain parts of your PDFs to improve readability.
Don’t like how this sounds? You have a couple of options:
Despite Apple’s best efforts, Mac malware does exist; we describe some cases below. However, before you panic: Mac malware and viruses are very rarely found “in the wild”.
From time to time you will hear of high-profile trojans, malware, and ransomware targeting the Windows world; very rarely is this a threat to Macs. For example, the worldwide WannaCry/WannaCrypt ransomware attack of May 2017 targeted only Windows machines and was therefore no threat to Macs.
Luckily Apple has various measures in place to guard against such threats. For example, macOS shouldn’t allow the installation of third-party software unless it comes from the App Store or identified developers. To check these settings in macOS Ventura, go to System Settings > Privacy & Security and scroll to the Security section, or, if you are using Monterey or older, go to System Preferences > Security & Privacy > General. There you can specify whether only apps from the Mac App Store can be installed, or whether you are happy to allow apps from identified developers too. If you were to install something from an unknown developer, Apple would warn you to check its authenticity.
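If you prefer to verify this from the command line, macOS ships with the spctl utility, which queries Gatekeeper directly. Here is a minimal Python sketch wrapping it; the app path is just an example, and the exact report text can vary between macOS versions:

```python
import subprocess

def gatekeeper_status() -> str:
    """Report whether Gatekeeper assessments are enabled."""
    result = subprocess.run(["spctl", "--status"], capture_output=True, text=True)
    return result.stdout.strip() or result.stderr.strip()

def assess_app(app_path: str) -> str:
    """Ask Gatekeeper whether it would allow this app to run."""
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", app_path],
        capture_output=True, text=True,
    )
    # spctl writes its verdict ('accepted'/'rejected') and the matching
    # rule (e.g. 'source=Mac App Store') to stderr.
    return result.stderr.strip()

if __name__ == "__main__":
    print(gatekeeper_status())                  # e.g. "assessments enabled"
    print(assess_app("/Applications/Safari.app"))
```

Running the assessment against an unsigned download should report “rejected”, which is the same refusal you see in the warning dialog described above.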
In addition, Apple has its own built-in anti-malware tool. Apple keeps its malware definitions in the XProtect file on your Mac, and every time you download a new application, macOS checks it against those definitions. This is part of Apple’s Gatekeeper software, which blocks apps created by malware developers and verifies that apps haven’t been tampered with. For more information read: how Apple protects you from malware. We also discuss whether Macs need antivirus software separately.
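For the curious, you can also check which version of the XProtect definitions your Mac currently carries. This is a rough sketch only: the bundle path below is what recent macOS releases use (older systems kept XProtect under /System/Library/CoreServices), so treat the location as an assumption:

```python
import plistlib
from pathlib import Path

# Assumed location of the XProtect bundle on recent macOS versions.
XPROTECT_PLIST = Path(
    "/Library/Apple/System/Library/CoreServices/"
    "XProtect.bundle/Contents/Info.plist"
)

def xprotect_version() -> str:
    """Read the version string of the locally installed XProtect definitions."""
    with XPROTECT_PLIST.open("rb") as f:
        info = plistlib.load(f)  # handles both binary and XML plists
    return info.get("CFBundleShortVersionString", "unknown")

if __name__ == "__main__":
    print(f"XProtect definitions version: {xprotect_version()}")
```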
In recent years Mac malware has actually decreased; however, as you will see if you read on, Macs are not completely safe from attacks. To stay safe, we recommend you read our best Mac security tips and our round-up of the best Mac antivirus apps, in which we highlight Intego as our top pick.
Another thing to note is that Apple’s own M-series chips, which it has been using in Macs since November 2020, are considered more secure than Intel processors. However, malware dubbed Silver Sparrow was found on M1 Macs soon after launch, so even Apple’s own chips are not immune.
Curious to know what Mac viruses are out there? In this article we will endeavour to supply you with a complete list.
When: October 2022. What: Provides a backdoor onto the target system, targeting a vulnerability in a third-party Unix tool. Who: Very specific targets, as pkexec is rarely found on Macs.
When: August 2022. What: Malware disguised as job postings. Who: Targeting Coinbase users and Crypto.com.
When: July 2022. What: VPN app with two malicious binaries: ‘softwareupdated’ and ‘covid’.
When: July 2022. What: Spyware downloader that uses public cloud storage services such as Dropbox, Yandex Disk and pCloud. Exploited CVE-2020-9934, which was closed in macOS Catalina 10.15.6 in August 2020.
When: May 2022. What: Supply chain attack with screencapture, keylogging, remote file retrieval. Who: Targeted the Rust development community.
When: May 2022. What: Hoping that users might mistype and download the malware instead of the legitimate pykafka. Who: Targeting the PyPI registry.
When: April 2022. What: Distributed via a Disk Image masquerading as a collection of Bitget Apps. Who: Targeting gambling websites.
When: March 2022. What: Distributed as a CorelDraw file that was hosted on a Google Drive. Who: Targeting protest groups in Asia.
When: January 2022. What: Included code for searching and writing files, dumping the keychain, running a remote desktop and more. Read more here: Patched Mac malware sheds light on scary backdoor for hackers. Who: Targeting supporters of democracy in Hong Kong.
When: January 2022. What: Chrome browser extension that could steal information, hijack the search engine queries, and serve adware.
When: November 2021. What: Keylogger, screen capturer and backdoor. Who: Targeting supporters of pro-democracy activism in Hong Kong.
When: September 2021. What: Trojan that spread disguised as iTerm2 app. Microsoft’s Remote Desktop for Mac was also trojanized with the same malware. Who: Spread via sponsored web links and links in the Baidu search engine.
When: May 2021 (originally from August 2020). What: Used a zero-day vulnerability in Safari. See: macOS 11.4 patches flaws exploited by XCSSET malware. Who: Aimed at Chinese gambling sites.
When: July 2021. What: The XLoader malware was one of the most prevalent pieces of Windows malware to have been confirmed to run on macOS. XLoader is a variant of Formbook, a program used to steal login credentials, record keystrokes, and download and execute files.
When: July 2021. What: New multi-platform version of Milum Trojan embedded in a Python file. Who: Targeting Middle East activists.
When: March 2021. What: A Trojan hidden in Xcode projects on GitHub had the potential to spread among the Macs of iOS developers. Once installed, a malicious script runs that installs an “EggShell backdoor”. Once open, the Mac’s microphone, camera and keyboard can be hijacked, and files can be sent to the attacker. The malware was found in a ripped version of TabBarInteraction. Read more here: New Mac malware targets iOS developers. Who: Attack on iOS developers using Apple’s Xcode.
When: February 2021. What: Adload dropper that was notarized by Apple and used a Gatekeeper bypass.
When: February 2021. What: Based on Pirrit and known as GoSearch22; infected Macs would see unwanted adverts. More information here: M1 Macs face first recorded malware.
When: January 2021. What: Malware targeting Macs equipped with the M1 processor. Used the macOS Installer Javascript API to execute commands. According to Malwarebytes, by February 2021 Silver Sparrow had already infected 29,139 macOS systems in 153 countries, most of the infected Macs being in the US, UK, Canada, France and Germany. More details here: What you need to know about Silver Sparrow Mac malware.
When: January 2021 (but first detected in 2015). What: Cryptocurrency miner distributed via pirated copies of popular apps including League of Legends and Microsoft Office.
When: January 2021. What: Remote Access Trojan targeting multiple platforms including macOS. Who: Targeting cryptocurrency users.
When: October 2020. What: GravityRAT was an infamous Trojan on Windows, which, among other things, had been used in attacks on the military. It arrived on Macs in 2020. The GravityRAT Trojan can upload Office files, take automatic screenshots and record keyboard logs. GravityRAT uses stolen developer certificates to bypass Gatekeeper and trick users into thinking they are installing legitimate software. The Trojan is hidden in copies of various legitimate programs developed with .NET, Python and Electron. We have more information about GravityRAT on the Mac here.
When: August 2020. What: Mac malware spread through Xcode projects posted on GitHub. The malware – a family of worms known as XCSSET – exploited vulnerabilities in WebKit and Data Vault. It would seek to access information via the Safari browser, including login details for Apple, Google, PayPal and Yandex services. Other types of information collected include notes and messages sent via Skype, Telegram, QQ and WeChat. More information here.
When: June 2020. What: ThiefQuest, which we discuss here: Mac ransomware ThiefQuest/EvilQuest could encrypt your Mac, was ransomware spreading on the Mac via pirated software found on a Russian torrent forum. It was initially thought to be Mac ransomware – the first such case since 2017 – except that it didn’t act like ransomware: it encrypted files, but there was no way to prove you had paid a ransom and no way to subsequently decrypt the files. It turned out that rather than extorting a ransom, ThiefQuest was actually trying to obtain the data. Known as “wiper” malware, this was the first of its kind on the Mac.
When: July 2019. What: These were described by Intego as “backdoor malware” with capabilities such as keystroke logging and screenshot taking. They were a pair of Firefox zero-days that targeted those using cryptocurrencies. They also bypassed Gatekeeper.
When: June 2019. What: This was a cryptocurrency miner that was distributed via a cracked installer for Ableton Live. The cryptocurrency mining software would attempt to use your Mac’s processing power to make money.
When: June 2019. What: This malware attempted to add tabs to Safari. It was also digitally signed with a registered Apple Developer ID.
When: May 2019. What: It exploited a zero-day vulnerability in Gatekeeper to install malware. The “MacOS X GateKeeper Bypass” vulnerability had been reported to Apple that February, and was disclosed by the person who discovered it on 24 May 2019 because Apple had failed to fix the vulnerability within 90 days. Who: OSX/Linker tried to exploit this vulnerability, but it was never really “in the wild”.
When: January 2019. What: The CookieMiner malware could steal a user’s passwords and login information for their crypto wallets from Chrome, obtain browser authentication cookies associated with cryptocurrency exchanges, and even access iTunes backups containing text messages in order to piece together the information required to bypass two-factor authentication, gain access to the victim’s cryptocurrency wallet, and steal their cryptocurrency. Unit 42, the security researchers who identified it, suggested that Mac users clear their browser caches after logging in to financial accounts. Since it’s connected to Chrome, we also recommend that Mac users choose a different browser. Find out more about CookieMiner Mac malware here.
When: 2018. What: OSX.SearchAwesome was a kind of adware that targets macOS systems and could intercept encrypted web traffic to inject ads.
When: August 2018. What: Mac Auto Fixer was a PUP (Potentially Unwanted Program), which piggybacks on to your system via bundles of other software. Find out more about it, and how to get rid of it, in What is Mac Auto Fixer?
When: June 2019. What: This Mac malware was found on several websites, including a comic-book-download site, in June 2019. It even showed up in Google search results. CrescentCore was disguised as a DMG file of the Adobe Flash Player installer. Before running, it would check whether it was inside a virtual machine and would look for antivirus tools. If the machine was unprotected, it would install either a file called LaunchAgent, an app called Advanced Mac Cleaner, or a Safari extension. CrescentCore was able to bypass Apple’s Gatekeeper because it had a signed developer certificate assigned by Apple. That signature was eventually revoked by Apple. But it shows that although Gatekeeper should stop malware getting through, it can be bypassed. Again, we note that Adobe ended support for Adobe Flash on 31 December 2020, so this should mean fewer cases of malware being disguised as the Flash Player.
When: May 2018. What: Cryptominer app. Infected users noticed their fans spinning particularly fast and their Macs running hotter than usual, an indication that a background process was hogging resources.
When: February 2018. What: Mac adware that infected Macs via a fake Adobe Flash Player installer. Intego identified it as a new variant of the OSX/Shlayer malware, though it may also be referred to as Crossrider. In the course of installation, a fake Flash Player installer dumps a copy of Advanced Mac Cleaner, which tells you in Siri’s voice that it has found problems with your system. Even after removing Advanced Mac Cleaner and the various components of Crossrider, Safari’s homepage setting remains locked to a Crossrider-related domain and cannot be changed. Since 31 December 2020 Flash Player has been discontinued by Adobe and is no longer supported, so if you see anything telling you to install Flash Player, ignore it. You can read more about this incident here.
When: January 2018. What: MaMi malware routes all the traffic through malicious servers and intercepts sensitive information. The program installs a new root certificate to intercept encrypted communications. It can also take screenshots, generate mouse events, execute commands, and download and upload files.
When: January 2018. What: Apple confirmed it was one of a number of tech companies affected, highlighting that: “These issues apply to all modern processors and affect nearly all computing devices and operating systems.” The Meltdown and Spectre bugs could allow hackers to steal data. Meltdown would involve a “rogue data cache load” and can enable a user process to read kernel memory, according to Apple’s brief on the subject. Spectre could be either a “bounds check bypass,” or “branch target injection” according to Apple. It could potentially make items in kernel memory available to user processes. They can be potentially exploited in JavaScript running in a web browser, according to Apple. Apple issued patches to mitigate the Meltdown flaw, despite saying that there is no evidence that either vulnerability had been exploited. More here: Meltdown and Spectre CPU flaws: How to protect your Mac and iOS devices.
When: April 2017. What: This macOS Trojan horse appeared to be able to bypass Apple’s protections and could hijack all traffic entering and leaving a Mac without a user’s knowledge – even traffic on SSL-TLS encrypted connections. OSX/Dok was even signed with a valid developer certificate (authenticated by Apple), according to CheckPoint’s blog post. It is likely that the hackers accessed a legitimate developer’s account and used that certificate. Because the malware had a certificate, macOS’s Gatekeeper would have recognized the app as legitimate and therefore not prevented its execution. Apple revoked that developer certificate and updated XProtect. OSX/Dok targeted OS X users via an email phishing campaign. The best way to avoid falling foul of such attempts is not to respond to emails that require you to enter a password or install anything. More here.
When: February 2017. What: X-agent malware was capable of stealing passwords, taking screenshots and grabbing iPhone backups stored on your Mac. Who: The malware apparently targeted members of the Ukrainian military and was thought to be the work of the APT28 cybercrime group, according to Bitdefender.
When: February 2017. What: MacDownloader software found in a fake update to Adobe Flash. When the installer was run, users would get an alert claiming that adware had been detected. When asked to click to “remove” the adware, the MacDownloader malware would attempt to transmit data including the user’s Keychain (usernames, passwords, PINs, credit card numbers) to a remote server. Who: The MacDownloader malware is thought to have been created by Iranian hackers and was specifically targeted at the US defence industry. It was located on a fake site designed to target the US defence industry.
When: February 2017. What: PC users have had to contend with macro viruses for a long time. Applications such as Microsoft Word, Excel, and PowerPoint allow macro programs to be embedded in documents. When these documents are opened, the macros run automatically, which can cause problems. Mac versions of these programs hadn’t had an issue with malware concealed in macros because when Apple released Office for Mac 2008, it removed macro support. However, the 2011 version of Office reintroduced macros, and in February 2017 malware was discovered in a Word macro within a Word doc about Trump. If the file is opened with macros enabled (which doesn’t happen by default), it attempts to run Python code that could theoretically perform functions such as keylogging and taking screenshots. It could even access the webcam. The chance of being infected this way is very small, unless you have received and opened the file in question (which would surprise us), but the point is that Mac users have been targeted in this way.
When: January 2017. What: Fruitfly malware could capture screenshots and webcam images, as well as looking for information about the devices connected to the same network – and then connects to them. Malwarebytes claimed the malware could have been circulating since OS X Yosemite was released in 2014.
When: April 2016. What: OSX/Pirrit was apparently hidden in cracked versions of Microsoft Office or Adobe Photoshop found online. It would gain root privileges and create a new account in order to install more software, according to Cybereason researcher Amit Serper in this report.
When: November 2016. What: Mac-targeted denial-of-service attacks originating from a fake tech support website. There were two versions of the attack depending on your version of macOS. Either Mail was hijacked and forced to create vast numbers of draft emails, or iTunes was forced to open multiple times. Either way, the end goal is to overload system memory and force a shutdown or system freeze.
When: March 2016. What: KeRanger was ransomware (now extinct). For a long time ransomware was a problem that Mac owners didn’t have to worry about, but the first ever piece of Mac ransomware, KeRanger, was distributed along with a version of a piece of legitimate software: the Transmission torrent client. Transmission was updated to remove the malware, and Apple revoked the GateKeeper signature and updated its XProtect system, but not before a number of unlucky users got stung. We discuss how to remove Ransomware here.
When: February 2014. What: The problem stemmed from Apple’s implementation of a basic encryption feature that shields data from snooping. Apple’s validation of SSL encryption had a coding error that bypassed a key validation step in the web protocol for secure communications. There was an extra “goto” command that hadn’t been closed properly in the code that validated SSL certificates, and as a result, communications sent over unsecured Wi-Fi hotspots could be intercepted and read while unencrypted. Apple quickly issued an update to iOS 7 but took longer to issue an update for Mac OS X, despite confirming that the same SSL/TLS security flaw was also present in OS X. Who: In order for this type of attack to be possible, the attacker would have to be on the same public network. Read more about the iPad and iPhone security flaw here.
When: October 2011. What: OSX/Tsunami.A was a new variant of Linux/Tsunami, a malicious piece of software that commandeers your computer and uses its network connection to attack other websites. More information here.
When: September 2011. What: Posing as a Chinese-language PDF, the nasty piece of software installs backdoor access to the computer when a user opens the document. More here.
When: September 2011. What: Flashback is thought to have been created by the same people behind the MacDefender attack and could use an unpatched Java vulnerability to install itself. Read more here: What you need to know about the Flashback trojan. Who: Apparently more than 500,000 Macs were infected by April 2012.
When: May 2011. What: Trojan Horse phishing scam that purported to be a virus-scanning application. Was spread via search engine optimization (SEO) poisoning.
When: February 2011. What: More of a proof-of-concept, but a criminal could find a way to get a Mac user to install it and gain remote control of the hacked machine. BlackHole was a variant of a Windows Trojan called darkComet. More information here: Hacker writes easy-to-use Mac Trojan.
For more information about how Apple protects your Mac from security vulnerabilities and malware read: Do Macs need antivirus software.
For anyone who has been living under a rock, ChatGPT is a chatbot created by OpenAI that provides seemingly lifelike answers to input requests — anything from writing code and generating backlinks to drafting blogs or explaining quantum physics. The attention it has been drawing is immense, and the bot now has over 100 million users. OpenAI has now rolled out a subscription plan that is accessible only to those in the US.
The more I read and saw, the more curious I got. I’ve never seen any AI technology evolve so rapidly and garner so much attention. But along with it came feedback and information about how the bot was providing incorrect, biased, and sometimes harmful answers. As the owner of a Digital Marketing Agency, I wondered what impact ChatGPT and Generative AI as a whole would have on the different functions within my organization and the larger ecosystem.
Generative AI became popular in the last decade with the advent of deep learning architectures such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and Transformers. These models were initially used to create realistic human images. The Transformer model, which could generate text and write software code, was a game changer because it was scalable.
With ChatGPT and DALL.E2, we have now reached a point where, thanks to better algorithms, larger datasets, and better models, these generative AI tools can create more realistic images and write long paragraphs of coherent text. I believe we have reached a tipping point, and it would be foolish to think that Generative AI will not have an impact on Digital Marketing. Gartner expects that by 2025, 30% of outbound marketing messages from large organizations will be synthetically generated, up from less than 2% in 2022. I personally believe that this number will be higher. Organizations need to get a deep understanding of what these tools are capable of, what they can and cannot be used for, and how quickly they can adapt to the change while staying accountable for their output.
ChatGPT
We are not new to content writing tools. They have been around for several years. Simple tools such as Grammarly and Hemingway help content teams with spelling and grammar, while more advanced tools such as Copy.ai, Rytr, Frase, INK, Jasper, etc., generate copy for emails, blogs, Ads, websites, and social media posts.
There have always been concerns about using these tools. While the pros revolved around scalability (speed of churning out content), writing efficiency, and cost-effectiveness, the cons included plagiarism concerns, content quality, and the lack of creativity or originality. But the biggest risk for AI-generated content was Google’s algorithm update, which devalues content written by AI tools. Google was clear that original content that adds value to the searcher matters more than writing for search engines.
However, ChatGPT has distinguished itself by providing one answer to your question or request. Think about consolidating all the responses on Google and providing one answer. It sifts through millions of pieces of information and provides an almost human-like response.
I asked my content team to experiment with ChatGPT: to feed it queries, commands, and requests, and to chat with it about our service lines and the work we have done so far. All of them found the tool easy to use, and the results were mostly accurate. However, they also found some results to be very verbose, lacking in specifics (no dates, numbers, data, citations, or quotes), and repetitive despite reframing the question. It put out incorrect information as well.
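For teams who want to run this kind of experiment at scale rather than through the web interface, OpenAI exposes the same model through an API. Here is a minimal Python sketch using the pre-1.0 openai SDK that was current at the time of writing; the model name, system prompt, and example question are placeholders you would replace with your own:

```python
# pip install openai  (the pre-1.0 SDK; newer versions use a different client API)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(question: str) -> str:
    """Send one question to the chat model and return its reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a concise marketing copywriter."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,  # some creativity, but not free-associating
    )
    return response.choices[0].message.content

# Example: the kind of service-line query the team fed it manually.
print(ask("Draft three subject lines for an email about our SEO audit service."))
```

Scripting the queries makes it easy to re-run the same prompts after rewording them, which is exactly the repetition test my team did by hand.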
DALL.E2
Almost all designers will tell you that searching for the right images takes more time than creating the design itself. If Generative AI tools can provide custom-made images quickly, then designers can use the time they save for more creative thinking and high-end design work.
OpenAI’s ‘DALL.E2 Explained’ video on YouTube highlights how certain tasks (like replacing a dog with a cat without changing anything else in the background) can be done in seconds. It chooses the precise cat image (from among hundreds) which fits perfectly with the background in one-tenth the time it would take a designer to find the same image. Does it then make business sense for a designer to engage in this time-consuming, repetitive task? I would think not. Instead of going through reams and reams of stock images, it would be more productive for a designer to take the output DALL.E2 gives and refine it further.
My design team found that the tool could generate realistic images, but options were limited. Images of tech, such as crypto and cloud, were inadequate. Also, the images can only be downloaded as a .png file at the moment, which makes editing more difficult.
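As with ChatGPT, image generation can be scripted through OpenAI’s API rather than the web tool, which is handy for batch-producing draft visuals. A rough sketch using the same era’s openai SDK; the prompt and filename are illustrative only, and note that the result arrives as a URL to a PNG, matching the download limitation my team ran into:

```python
# pip install openai requests  (pre-1.0 SDK; the image endpoint returns URLs to PNG files)
import openai
import requests

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_image(prompt: str, outfile: str = "result.png") -> str:
    """Request one image for the prompt and save the PNG the API returns."""
    response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    url = response["data"][0]["url"]
    with open(outfile, "wb") as f:
        f.write(requests.get(url).content)
    return outfile

# Example: a quick, generic social-media visual.
generate_image("flat illustration of a cloud server protected by a padlock")
```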
Will Generative AI Displace Human Workers?
The debate about technology replacing humans has been going on for years. A study by McKinsey in 2017 stated that in about 60% of occupations, around one-third of the tasks or activities could be automated, which means there would be significant change for workplaces and workers. But there are always two sides to a story. A report by the World Economic Forum in 2020 states that by the end of 2025, technological progress (including AI) will create a net 12 million jobs.
On the surface, it looks like a threat. Questions like what would happen to content writing, writers, and designers can be heard even from my team. But I don’t believe it to be a threat. We have been using Jasper for over two years, and it has not replaced anyone. Companies that opt not to utilize content writers and instead rely only on tools to generate their content run the risk of publishing duplicate content. The content itself would be boring and impersonal while providing no value to the reader.
ChatGPT cannot solve everybody’s content problems. DALL.E2 may produce images faster, but it still requires finetuning, branding, and vetting to meet company standards. All my teams are united in saying that every output generated needed a once-over, a quick edit, some tweaking, or fact-checking.
It would be good to remember that AI tools are based on what came before. They can comprehend and generate, but they cannot create.
So how can you use ChatGPT and DALL.E2 in your organization?
As an organization, start by listing all your services. For each service, think about what ChatGPT or DALL.E2 can do and which services they cannot add value to. There might be services where they can help by completing almost 80-85% of the job. For example, a listicle blog written specifically for SEO purposes or a quick image for World Environment Day. For other services, they might be able to contribute only 5-10%. This could include value proposition documents or thought leadership articles.
With the feedback I received from my teams, I believe ChatGPT and DALL.E2 can be used to help with monotonous, repetitive tasks. The writer or designer can then utilize the time saved by adding more original and creative nuances to the piece or image.
My content team found the tool useful for quick research and answers, brainstorming, and generating new ideas when they got stuck or faced writer’s block. High-volume, low-impact work like SEO backlinks and Quora answers was quick to generate. Blog writers took the top five questions from answerthepublic.com for their subject and fed them into ChatGPT. They then had enough information to write a whole blog without needing to go through all the answers on Google. After adding data, insights, and anecdotes, the blog was personalized and ready.
The design team could generate images quickly where a combination of 2-3 elements was needed. They believed that the tool could be used for generic social media posts. Any time saved by the designer was then used to enhance the final output. The creative control still remained with the designer.
My thought leadership team asked ChatGPT for a weekly and monthly social media plan for the thought leaders they work with and found many new and useful ideas.
While it is tempting to see how we can use Generative AI to help our clients, start by using it to help your teams. How can it be used to benefit them? Where can you save them time so they can focus on more creative and intellectual work? How can Generative AI take the monotony out of creativity?
Using Generative AI Responsibly
The cons of Generative AI are evident. For every article admiring its capabilities, there are many that call out its dangers. My team agrees that some of the answers were opinionated, biased, and incorrect. Responsible AI is at the top of every organization’s list of things to do. From Google to Adobe, everyone is working on making AI more secure, safe, transparent, and fair.
In the meantime, as digital marketers, we need to acknowledge that while Generative AI tools might be responsible for the output, we are still accountable for what we publish and share. We must continue fact-checking, tweaking, refining, monitoring, and editing our work. In the long run, when we start using Generative AI for commercial purposes, we need to create standard operating procedures, policies, and fail-safes, and train employees on when and where Generative AI can and cannot be used.
Conclusion
Generative AI will change the way we think about creative work. It extends beyond Content and Design into Sales, Operations, and HR. If we do not accept it, it will be our loss.
In the end, using generative AI tools is like asking for advice. You listen to the advice and appreciate the input, but you also use your own judgment, experience, and common sense to decide what works best for you. It is like having your own personal assistant who can take away the tedious lowbrow work, leaving you to do more meaningful and creative work.
AI does not, at this point, have all the answers, and humans need to use their own intelligence and expertise to decide what works and what does not for their business. If Tony Stark can occasionally ask his most advanced and trusted AI machine Jarvis to shut up, so can we!
Innovations like ChatGPT and DALL·E 2 highlight the incredible advances that have taken place with AI, causing professionals in countless fields to wonder whether or not such innovations mean the end of thought leadership or if they should instead focus on the opportunities presented by such tools. Even more recently, PVC writers have detailed why we need these AI tools as well as how they can be turned into unexpected services.
What do filmmakers and other creative professionals really think about these developments though? What are the top concerns, questions and viewpoints surrounding the recent surge of AI generative technologies that have hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?
Below is how various PVC writers explored those answers in a conversation that took shape over email. You can keep the discussion going in the comments section or on Twitter.
I’m definitely not unbiased as I’m currently engaging with as much of it on a user level as I can get my hands on (and have time to experiment with) and sort out the useful from the useless noise, so I can share my findings with the ProVideo community.
But with that said, I do see some lines being crossed where there may be legitimate concerns that producers and editors will have to keep in mind as we forge ahead and not paint ourselves into a corner – either legally or ethically.
Sure, most of the tools available out there are just testing the waters – especially with the AI image and animation generators. Some are getting really good (except for too many fingers and huge breasts) but when it gets indistinguishable from reality, we may see some pushback.
So the question arises whether people generating AI images IN THE STYLE OF [noted artist] or PHOTOGRAPHED BY [noted photographer] are in fact infringing on those artists’ copyrights/styles or simply mimicking published works.
It is already being addressed in the legal system in a few lawsuits against certain AI tool developers, which will eventually shake out exactly how their tools gather the diffusion data they create from (it’s not just copy/paste). That will either settle the direct copyright infringement argument from artists, or it will be a nail in the coffin for many developers and forbid further access to available online libraries.
The next identifiable technology that raises potential concern, IMO, is the AI tools that will regenerate facial imagery in film/video for the purposes of dubbing and ratings controls – ripe for possible misuse and misinformation.
On that note, I’ve mentioned ElevenLabs in my last article as a highly advanced TTS (Text To Speech) generator that not only allows you to customize and modify voices and speech patterns from scripted text with astounding realism, but also lets you sample ANY recorded voice and then generate new voice recordings from your text inputs. For example, you could potentially use any A-list celebrity to say whatever marketing blurb you want in a VO, or make a politician actually tell the truth (IT COULD HAPPEN!).
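To make the concern concrete, here is roughly what driving such a TTS service looks like in code. This is a hedged sketch based on ElevenLabs’ public REST API as documented at the time of writing; the voice ID and key are placeholders, and the exact request options may differ for your account:

```python
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"   # placeholder
VOICE_ID = "YOUR_VOICE_ID"        # a voice you created or cloned in your account

def speak(text: str, outfile: str = "speech.mp3") -> str:
    """Send text to the ElevenLabs text-to-speech endpoint and save the audio."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    response = requests.post(
        url,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text},
        timeout=60,
    )
    response.raise_for_status()
    with open(outfile, "wb") as f:
        f.write(response.content)  # the endpoint returns MP3 audio bytes
    return outfile

speak("This is a scripted voice-over line for a product demo.")
```

The point is how little effort this takes: a cloned voice plus a dozen lines of code is the entire barrier to entry.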
But if you could combine those last two technologies together, then we have a potential for a flood of misuse.
I’ve been actively using AI for a feature documentary I’ve been working on the past few years, and it’s made a huge difference on the 1100+ archival images I’ve retouched and enhanced, so I totally see the benefits for filmmakers already. It does add a lot of value to the finished piece and I’m seeing much cleaner productions in high-end feature docs these days.
As recently demonstrated, some powerful tools and (rather complex) workflows are being developed specifically for video & film, to benefit on-screen dubbing and translations without the need for subtitles. It’s only a matter of time before these tools are ready and available for use by the general public.
“The crazy stuff I do at work… we use AI change up how films are made! @Flawlessai This would be why I don’t post on here much these days. #Filmmaker #filmmaking #ArtificialIntelligence pic.twitter.com/xYmLAO6MaB” — Sneaky Zebra (@sneakyzebra), January 23, 2023
As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies.
I am not sure we will see a sudden shift in the production process regarding AI and documentary filmmaking. There is something about being on location with a camera in hand, finding the emotional thread, and framing up to tell a good story. It is nearly impossible to replace the person holding the camera or directing the scene. I think the ability of a director or photographer to light a scene, light multi-camera interviews, and be with a subject through times of stress is irreplaceable.
Yet, AI can easily slip into the pre-production and post-production process for documentary filmmaking. For example, I already use Rev.com for its automatic transcription of interviews and captions. Any technology that speeds up collaboration and editing will run through post-production work like wildfire. I can remember when we paid production assistants to log reality TV footage. Not only was the transcription work tedious, it was also expensive to pay for throughout the shoot. Any opportunity to save a production company money will be used.
Then we get to the type of documentary filmmaking that may require the recreation of scenes to tell the story of something that happened before the documentary shoot. I could see documentary producers and editors turning to whatever AI tool can recreate a setting or scene, or even an influential person’s voice. The legal implications are profound, though, and I can see a waterfall of new laws giving notable people’s families intellectual property rights to their image and voice no matter how long ago they passed, or at the very least 100 years of control of that image and voice. Whenever there is money to be made from a person’s image or voice, there will be bad actors and those who ask for forgiveness instead of permission, but I bet the legal system will eventually catch up and protect those who want it.
The rights issues are extremely knotty (I’ve recently written about this). On one hand, the extant claims that a trained AI contains “copies of images” are factually incorrect. The trained state of an AI such as Stable Diffusion, which is at the centre of recent legal action, is represented by something like the weights of interconnections in a neural network, which is not image data. In fact, it’s notoriously difficult to interpret the internal state of a trained AI. Doing that is a major research topic, and our lack of understanding is why, for instance, it’s hard to show why an AI made a certain decision.
It could reasonably be said that the trained state of the AI contains something of the essence of an artist’s work and the artist might reasonably have rights in whatever that essence is. Worse, once an AI becomes capable of convincingly duplicating the style of an artist, probably the AI encompasses a bit more than just the essence of that artist’s work, and our inability to be specific about what that essence really is doesn’t change the fact that the artist really should have rights in it. What makes this really hard is that most jurisdictions do not allow people to copyright a style of artwork, so if a human artist learns how to duplicate someone else’s style, so long as they’re upfront about what they’re doing, that’s fine. What rubs people the wrong way is doing it with a machine which can easily learn to duplicate anyone’s work, or everyone’s work, and which can then flood the market with images in that style which might realistically begin to affect the original artist’s work.
In a wider sense this interacts with the broad issues of employment in general falling off in the face of AI, which is a society-level issue that needs to be addressed. Less skilled work might go first, although perhaps not – the AI can cut a show, but it can’t repair the burst water main without more robotics than we currently have. One big issue coming up, which probably doesn’t even need AI, is self-driving vehicles. Driving is a massive employer. No plans have been made for the mass unemployment that’s going to cause. Reasonable responses might include universal basic income but that’s going to require some quite big thinking economically, and the idea that only certain, hard-to-automate professions have to get up and go to work in the morning is not likely to lead to a contented society.
This is just one of a lot of issues workers might have with AI, and so the recent legal action might be seen as an early skirmish in what could be a quite significant war. I think Brian’s right about this not creating sudden shifts in most areas of production. To some extent the film and TV industry already does a lot of things it doesn’t really need to do, such as shooting things on 65mm negative. People do these things because it tickles them. It’s art. That’s not to say there won’t be pressure to use more efficient techniques when they are available, as has been the case with photochemical film, and that will create another tension (as if there aren’t already a lot) between “show” and “business”. As a species we tend to be blindsided by this sort of thing more than we really should be. We tend to assume things won’t change. Things change.
I do think that certain types of AI information might end up being used to guide decision-making. For instance, it’s quite plausible to imagine NLE software gaining analysis tools which might create the same sort of results that test screenings would. Whether that’s good or not depends how we use this stuff. Smart application of it might be great. Allowing it to become a slave driver might be a disaster, and I think we can all imagine that latter circumstance arising as producers get nervous.
While AI has a lot to offer, and will cause a great deal of change in our field and across society, I don’t think it’ll cause broad, sweeping changes just yet. Artificial Intelligence has been expected to be the next big thing for decades now, and (finally!) some recent breakthroughs are starting to have a more obvious impact. Yet, though ChatGPT, Stable Diffusion, DALL·E and Midjourney can be very impressive, they can also fail badly.
ChatGPT seems really smart, but if you ask it about a specialist subject that you know well, it’s likely to come up short. What’s worse than ChatGPT not knowing the answer? Failing to admit it, but instead guessing wrong while sounding confident. Just for fun, I asked it “Who wrote Final Cut Pro Efficient Editing” because that’s the modern equivalent of Googling yourself, right? It’s now told me that both Jeff Greenberg and Michael Wohl wrote the book I wrote in 2020, and I’m not as impressed as I once was.
Don’t get me wrong: if you’re looking for a surface level answer, or something that’s been heavily discussed online, you can get lucky. It can certainly write the script for a very short, cheesy film. (Here’s one it wrote: https://vimeo.com/795582404/b948634f34.) Lazy students are going to love it, but it remains to be seen if it’s really going to change the way we write. My suspicion is that it’ll be used for a lot of low-value content, as AI-based generators like Jasper are already used today, but the higher-value jobs will still go to humans. And that’s a general theme.
Yes, there are post-production jobs (rotoscoping, transcription) done by humans today which will be heavily AI-assisted tomorrow. Tools like Keyper can mask humans in realtime, OpenAI’s Whisper does a spectacular job of transcription on your own computer, and there are a host of AI-based tools like Runway which can do amazing tricks. These tasks are mostly technical, though, and decent AI art is something novel. Image generators can create impressive results, albeit with many failures, too many fingers, and lingering ethical and copyright issues. But I don’t think any of these tools are going away now. Technology always disrupts, but we adapt and find a new normal. Some succeed, some fail.
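Whisper in particular is worth seeing in code, because local transcription really is this short. A minimal sketch assuming the open-source openai-whisper package and an audio file of your own (ffmpeg must be installed and on the PATH):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")          # larger models ("small", "medium") are more accurate
result = model.transcribe("interview.wav")  # hypothetical local file

print(result["text"])                       # the full transcript
for seg in result["segments"]:              # rough per-segment timestamps
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```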
A saving grace is that it’s easy to get an AI model about 95% of the way there, but the last 5% gets a bit harder, and the final 1% is nearly impossible. Now sometimes that 5% doesn’t matter — a voice recording that’s 95% better is still way better, and a transcription that’s nearly right is easy to clean up. But a roto job where someone’s ears keep flicking in and out of existence is not a roto job the client will accept, and it’s not necessarily something that can be easily amended.
So, if AI is imperfect, it won’t totally replace humans at all the jobs we’re doing today. Many will be displaced, but we’ll get new jobs too. AI will certainly make it into consumer products, where people don’t care if a result is perfect, but to be part of a professional workflow, it’s got to be reliable and editable. There are parallels in other creative fields, too: after all, graphic designers still have a livelihood despite the web-based templated design tool Canva. Yes, Canva took away a lot of boring small jobs, but it doesn’t scale to an annual report or follow brand guidelines. The same amount of good work is being done by the same number of professionals, and there are a lot more party invitations that look a little better.
For video, there will be a lot more AI-based phone apps that will perform amazing gimmicks. More and better TikTok filters too. There will also be better professional tools that will make our jobs easier and some things a lot quicker — and some, like the voice generation and cleanup tools, will find fans across the creative world. Still, we are a long, long way from clients just asking Siri 2.0 to make their videos for them.
Beyond video, the imperfection of AI is going to heavily delay any society-wide move to self-driving cars. The world is too unpredictable, my Tesla still likes to brake for parked cars on bends, and to move beyond “driver assistance”, self-driving tech has to be perfect. A capability to deal with 99.9999% of situations is not enough if the remaining 0.0001% kills someone. There have been some self-driving successes where the environment is more carefully mapped and controlled, but a general solution is still a way off. That said, I wouldn’t be surprised to see self-driving trucks limited to predictable highway runs arrive soon. And yes, that will put some people out of work.
So what to do? Stay agile, be ready for change. There’s nothing more certain than change. And always remember, as William Gibson said: “The future is already here – it’s just not very evenly distributed.”
AI audio tools keep growing. Some that come to mind are Accusonus ERA (currently being bought), Adobe Speech Enhancement, AI Mastering, AudioDenoise, Audo.ai, Auphonic, Descript, Dolby.io, Izotope RX, Krisp, Murf AI Studio, Veed.io and AudioAlter. Of those, I have personally tested Accusonus ERA, Adobe Speech Enhancement, Auphonic, Descript and Izotope RX6.
I have published articles or reviews about a few of those in ProVideo Coalition.
There’s a lot of use of AI and “smart” tools in the audio space. I often think a lot of it is really just snake oil – using “AI” as a marketing term. But in any case, there are some cool products that get you to a solid starting point quickly.
Unfortunately, Accusonus is gone and has seemingly been bought by Meta/Facebook. If not directly bought, then they’ve gone into internal development for Facebook and are no longer making retail plug-ins.
In terms of advanced audio tools, Sonible is making some of the best new plug-ins. Another tool to look at is Adobe’s Podcast application, which is going into public beta. Their voice enhancement feature is available to be used now through the website. Processing is handled in the cloud without any user control. You have to take or leave the results, without any ability to edit them or set preferences.
AI and Machine Learning tools offer some interesting possibilities, but they all suffer from two biases. The first is the bias of the developers and the libraries used to train the software. In some cases that will be personal biases, and in others it will be the biases of the available resources. Plenty has been written about the accuracy of dog images versus cat images created by AI tools, or about facial recognition flaws with darker skin, including tattoos.
The second large bias is one of recency – mainly the internet. More data, both general and specific, is available from the last 10-20 years via internet resources than from any earlier period. If you want to find niche information from before the advent of the internet, let’s say before 1985, it can be a very difficult search. That won’t be something AI will likely access. For example, if you tried to have AI mimic the exact way that Cinedco’s Ediflex software and UI worked, I doubt it would happen, because the available internet data is sparse and it’s so niche.
I think the current state of the software is getting close enough to fool many people and could probably pass the famous Turing test criteria. However, it’s still derivative. AI can take A+B and create C or maybe D and E. What it can’t do today (and maybe never), is take A+B and create K in the style of P and Q with a touch of Z. At least not without some clear guidance to do so. This is the realm of artists to be able to make completely unexpected jumps in the thought process. So maybe we will always be stuck in that 95% realm and the last 1-5% will always be another 5 years out.
Another major flaw in AI and Machine Learning – in spite of the name – is that it does not “learn” based on user training. For instance, Pixelmator Pro uses image recognition to name layers. If I drag in a photo of the Eiffel Tower it will label it generically as tower or building. If I then correct that layer name by changing it to Eiffel Tower, the software does nothing to “learn” from my correction. The next time I drag in the same image, it still gets a generic name, based on shape recognition. So there’s no iterative process of “training” the library files that the software is based on.
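To be clear about what’s missing, the fix wouldn’t even require retraining the underlying model; a thin “correction memory” layered over a frozen classifier would do. Here is a hypothetical sketch (all names invented) of the kind of iterative behaviour I’d want:

```python
import json
from pathlib import Path

CORRECTIONS = Path("label_corrections.json")  # hypothetical persistent store

def load_corrections() -> dict:
    return json.loads(CORRECTIONS.read_text()) if CORRECTIONS.exists() else {}

def label_image(image_id: str, frozen_model_label: str) -> str:
    """Prefer a label the user previously corrected; fall back to the model."""
    return load_corrections().get(image_id, frozen_model_label)

def correct_label(image_id: str, user_label: str) -> None:
    """Remember the user's fix so the same image never gets the generic name again."""
    corrections = load_corrections()
    corrections[image_id] = user_label
    CORRECTIONS.write_text(json.dumps(corrections, indent=2))

# The frozen model keeps saying "tower"; one correction fixes it for good.
correct_label("eiffel-photo-001", "Eiffel Tower")
print(label_image("eiffel-photo-001", "tower"))  # -> "Eiffel Tower"
```

That apps don’t even do this simple overlay is the point: the “learning” in most shipped Machine Learning is frozen at the factory.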
I do think that AI will be a good assistant in many cases, but it won’t be perfect. Rotoscoping will still require human finesse (at least for a while). When I do interviews for articles, I record them via Skype or Zoom and then use speech-to-text to create a transcript. From that I will write the article, cleaning up the conversation as needed. Since the software tries to create a faithful transcription of what the speaker said, I often find that the clean-up effort takes more time and care than if I’d simply listened to the audio and transcribed it myself, editing as I went along. So AI is not always a time-saver.
There are certainly legal questions. At what point is an AI-generated image an outright forgery? How will college professors know whether a student’s paper is original versus something created through ChatGPT? I heard yesterday that handwriting is being pushed in some schools again, precisely because of such concerns (along with the general need to have legible writing). Certainly challenging ethical times ahead.
I think that in the world of film we have a bit of breathing room when it comes to advances in AI bringing significant changes, and perhaps a bit of an early warning of what might be to come. Our AI tools are largely technical rather than creative, and the creative ones are less well developed compared to the image and text creation tools, so they don’t yet pose much of a challenge to our livelihoods, and the legal issues aren’t as complicated. For example, AI noise reduction or upscaling effectively fixes our mistakes, and there isn’t much need for the models to be trained on data they might not have legal access to (though I imagine behind the scenes this is an important subject for them, as getting access to high-quality training data would improve their product).
I see friends who are writers or artists battling to deal with the sudden changes in the AI landscape. I know copywriters whose clients are asking them if they can’t just use ChatGPT now to save them money or others saying their original writing has been falsely flagged as AI-generated by an AI analysis tool and while I’m sure the irony is not lost on them, it doesn’t lessen their stress. So in terms of livelihoods and employment I think there are real ethical issues, though I have no idea how they can be solved, aside from trusting that creative people will always adapt, though that takes time and the suddenness of all this has been hard for many.
On the legal side, I feel like there is a massive amount of catching up to do and it will be fascinating to see how these current cases work out. It feels like we need a whole new set of legal precedents to deal with emerging AI tools, aside from just what training data the models can access. Looking at the example of deepfakes, I love what a talented comedian and voice impersonator like Charlie Hopkinson can do with it – I love watching Gandalf or Obi-Wan roasting their own shows – but every time I watch, I wonder what Sir Ian McKellen would think – though somehow I think he would take it quite well. Charlie does put a brief disclaimer on the videos, but that doesn’t feel enough to me. I would have thought the bare minimum would be a permanent disclaimer watermark, let alone a signed permission from the owner of that face! I think YouTube has put some work into this, focusing more on the political or the even less savoury uses, which of course are more important, but more needs to be done.
I think we in the worlds of production and post would be wise to keep an eye on all the changes happening so we can stay ahead and make them work to our advantage.
I have been experiencing a sense of excitement and wonderment over the most recent developments in AI.
It’s accelerating. And at the same time, I’m cynical – I’ve read/watched exciting research (sometimes from SIGGRAPH, sometimes from some smaller projects) that never seems to see the light of day.
About six years ago, I did some consulting work around machine learning and have felt like a child in a candy store, discovering something new and fascinating around every corner.
Am I worried about AI from a professional standpoint?
Nope. Not until they can handle clients.
If the chatbots I encounter are any indicators? It’s going to be a while.
For post-production? It’s frustrating when the tools don’t work. Because there’s no workaround that will fix it when it fails.
ChatGPT is an excellent example of this. It’s correct (passing the bar, passing the MCAT), until it’s confidently incorrect. It gave me answers that just don’t exist/aren’t possible. How is someone to evaluate this?
If you use ChatGPT as your lawyer, and it’s wrong, where does the liability live?
That’s the key in many aspects – it needs guidance, a professional who knows what they’re doing.
In creating something from nothing: there are a couple of areas that are in the crosshairs.
These tools excite me most in the functional areas instead of the “from scratch” perspective.
Taking painful things and reducing the difficulty.
That’s what good tools should always do, especially when they leave the artist the ability to influence the guidance.
The brightest future.
You film an interview. You read the text, clean it up, and tell a great story.
Except there’s an issue – something is unclear in the statements made. You get clearance from the interviewee about the rephrasing of a statement. Then use an AI voice model of their voice to form the words. And another to re-animate the lips to look like the subject said it.
This is “almost here.”
The dark version of this?
It’s society-level scary (but so are auto-driving cars that can’t really recognize children, which one automaker is struggling with.)
Here’s a scary version: You get a phone call, and you think it’s a parent or significant other. It’s not. It’s a cloned voice and something like ChatGPT trained on content that can actually respond in near-real time. I’ll leave the “creepy” factor up to you here.
Ethical ramifications
Jeff Foster brings up this question – what happens when we can convincingly make people say what we want?
At some level, we've had that power for over a decade. Just the fact that we can take a word out of someone's interview gives us that power. AI will simply make it easier and more accessible – and make "I didn't say that; it was AI" a viable defense.
It’s going to be ugly because our lawmakers, and our judicial system, can’t educate themselves quickly enough if the past is any indication.
Generative AI isn't "one-click"
As Iain pointed out about the script he had ChatGPT write: it did the job, it found the format, but it wasn't very good.
I do wonder how it might help me get past writer's block.
Generative text is pretty scary – and may disrupt Google.
Since Google's ranking is based on inbound/outbound links, blog spam is going to explode even further very soon, and it'll be harder to tell what content is well written and what is not.
Unless it comes from a specific person you trust.
And as Oliver pointed out, it’s problematic until I can train it with my data – it needs an artist.
Without the ability to re-train, the same failures will keep failing in the same way. Then we're in workaround hell.
Personally I believe that AI technologies are going to cause absolutely massive disruption not just to the production and post-production industries, but across the entire gamut of human activity in ways we can’t even imagine.
In the broadest sense, the course of evolution has been one of increasing complexity, often with exponential jumps (e.g., Big Bang, Cambrian explosion, Industrial Revolution). AI is a vehicle for another exponential leap. It is extraordinarily exciting and terrifying, fraught with danger, yet it will also create huge new opportunities.
How do we position ourselves to benefit from, or at least survive, this next revolution?
I’d suggest moving away from any task or process that AI is likely to take over in the short term. Our focus should be on what humans (currently) do better than AI. Billy Oppenheimer, in his article on The Coffee Cup Theory of AI, calls this Taste and Discernment. Your ability to connect to other humans through your storytelling, to tell the difference between the great and the good, to choose the line of dialog, the lighting, the composition, the character, the blocking, the take, the edit, the sound design…and use AI along the way to create all the scenarios from which you use your developed sense of taste to discern what will connect with an audience.
AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition.
(These words written by me with no AI assistance :-))
Keep the discussion going in the comments section or on Twitter.
Tech giants like Google and Microsoft and Israeli tech leaders like Wix and Fiverr see AI as an opportunity. It is a technology that has been at the center of their endeavors for years, but recent developments in the field, most notably by OpenAI, represent a threat to many tech companies.
Since OpenAI launched DALL-E, which generates digital images, last summer – and even more so in recent weeks amid the hype surrounding its AI chatbot ChatGPT – the industry seems to be in the midst of an earthquake. The promises that have been heard in the field for decades are now ready for use and being widely distributed to private users. At the click of a button, anyone can get information or structured content, at a high level, on any subject. In a few years, the industry promises, we will be listening to musical compositions, playing video games, building websites and maybe even entire companies through one written command and within minutes.
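To give a sense of how low the barrier already is, here is a minimal sketch of that "click of a button" using OpenAI's public API as it stood when this article was published (the openai Python package with the text-davinci-003 completion model; the prompt and key handling are illustrative assumptions):

```python
import os
import openai  # pip install openai

# Authenticate with an API key stored in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# One written command in, structured text out.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a three-paragraph product description for a handmade ceramic mug.",
    max_tokens=300,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

A few lines of code and a few cents per call – which is exactly why the companies below are treating this as an earthquake rather than a demo.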
Microsoft has understood the scale of the opportunity. Following a series of reports abroad, last week the tech giant confirmed a multiyear, multibillion-dollar investment in OpenAI, extending its partnership with the ChatGPT developer, whose expenditure is four times its revenue. The investment will reportedly total $10 billion, enabling Microsoft to integrate the smart technology into its products. Such an investment values OpenAI at $29 billion; Microsoft will take 75% of its profits until its investment is returned, and after that will hold 49% of its shares. In 2019, Microsoft invested $1 billion in OpenAI.
Google, on the other hand, has adopted a different strategy. The tech giant began hiring thousands of AI experts from academia more than a decade ago, and this is expected to lead to the launch of a rival technology, which some believe will be even more advanced than the next generation of ChatGPT. Its challenge mainly concerns investors and shareholders: the resulting intelligence could disrupt the interface that presents its ads - instead of a page with 10 search results and paid links, there would be a paragraph or more of structured text.
In addition, the new technology allows AI to run on every computer - a decentralized model that challenges Google's centralized computing, where everything is conducted in closed, remote server farms.
Social networks have a problem, just like Google
The new developments also pose an existential threat to social networks such as Twitter, Instagram and TikTok. Companies and intelligence agencies may use AI chatbots and the like to spread false information or amplify content at will through networks of tweets, shares, reactions and likes, by setting up smart bots that write like humans, at inhuman rates, across a range of profiles.
Beyond the flood of content, this is also a financial threat to Twitter, Facebook and other social networks. Flooding the network with more and more automated content could overwhelm the servers, or at the very least cause malfunctions.
Genuine threat posed to image design market
AI-based image generators, like Midjourney and Stable Diffusion, may also significantly harm companies engaged in designing and editing images. Adobe, for example, might find itself struggling to keep customers and users who leave for free engines that can produce basic collages and artificially change the lighting of an image.
Business users seek quality results, and these are already available today: Israeli company Bria, which produces customized visual content at scale, can generate quality advertisements, while another Israeli company, Astria, lets users produce a series of high-quality fictional images based on existing product images or presenters, matched to the products and their brand language, after a brief selection process.
Israeli company Lightricks, which is mainly known for its selfie editing app Facetune, has been compelled to rethink its way forward following this technological earthquake. Lightricks CEO Zeev Farbman told "Globes," "We are in the midst of a revolution. In every board of directors or management meeting we talk about AI - there is a paradigm shift here, even at the level at which our cloud resources are used."
Perhaps because its founders came from academic image processing, Lightricks was exposed to AI even before the hype. Building on Stable Diffusion's open-source code, Lightricks is developing its own model to perform several effects, such as virtually altering hairstyles after a photo has been taken, or changing the color and texture of clothing. More recently, the company has even begun advertising its Photoleap design software as a kind of generative AI engine for desktop computers, offering capabilities similar to those of Midjourney and Stable Diffusion.
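To illustrate what "building on Stable Diffusion's open-source code" can look like in practice – this is a generic sketch of the underlying technique, not Lightricks's proprietary model – here is a localized clothing-color edit using the openly released inpainting weights via Hugging Face's diffusers library. The checkpoint name and file paths are assumptions for the example:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline  # pip install diffusers transformers

# Load a publicly available Stable Diffusion inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The photo to edit, plus a mask that is white over the region to change
# (here, the jacket) and black everywhere else.
image = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask = Image.open("jacket_mask.png").convert("RGB").resize((512, 512))

# Only the masked region is regenerated; the rest of the photo is preserved.
result = pipe(
    prompt="a red leather jacket, photorealistic, studio lighting",
    image=image,
    mask_image=mask,
).images[0]

result.save("portrait_red_jacket.png")
```

The hard product work - automatic masking, identity preservation, running this at mobile speed - is where a company like Lightricks differentiates, but the generative core is this accessible.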
Farbman says, "It's a wakeup call for the industry. Everyone understands that they must take the closed box that they have built and seamlessly combine it with advanced AI tools. If they don't do this, and if they don't change deeply, it will be very easy to disrupt what they are doing."
Wix has reasons for concern and also for reassurance
Israeli company Wix.com Ltd. (Nasdaq: WIX), which allows small and medium-sized businesses to set up their own websites to sell their products, has long been thinking about how to digest the new technology. The company is ostensibly under threat. Startups are currently raising millions of dollars for services that allow sophisticated websites to be set up at the press of a button. San Francisco-based Durable, for example, recently raised $6.5 million for services that set up websites "within 30 seconds": its AI engine finds relevant keywords, writes paragraphs, locates free images to use and integrates forms for registering potential customers. Israeli company SPIRITT, which allows companies to set up an app within a week, is also apparently taking a bite out of Wix's market, although it mainly challenges software houses.
But DisruptiveAI venture capital fund general partner Tal Barnoach, who has recently made several investments in AI, stresses Wix's unique strength. He says that the Israeli company has successfully positioned itself as a brand, in contrast to many of its rivals. "The number of brands in the field is very limited, and Wix is one of them. A brand carries major weight in any business's decision about where to set up its website. They seek a safe brand that will ensure the existence and stability of the site."
Despite that, Wix could be harmed across a long list of complementary services that can, to a certain extent, be replaced by a machine, such as design and marketing content.
Wix has tried to preempt any damage by launching the DreamUp image generator through its DeviantArt online art gallery subsidiary, which is used by a large community of artists. But the launch ran into an unexpected obstacle: several artists added Wix to a class-action lawsuit in the US, also aimed at other image generators, alleging that it allowed their works to train the generator in order to push them out of the market. The lawsuit has not yet been certified by the court, and Wix can argue in its defense that the launch of the tool gave artists the option of blocking the use of their protected works in image generators.
The immediate winners and the rise of freelancers
The creative AI market is still in its infancy, but according to DisruptiveAI general partner Yorai Fainmesser, quite a few startups, dozens of them Israeli, have already succeeded in establishing themselves in the industry, and some of them are profitable. Most are based on high-level creative AI to improve processes for businesses and organizations. After an extensive period in which these companies wandered in the "desert" due to market skepticism about such technologies, the hype of the last few months has sent many customers looking for more and more AI solutions, leading to the rapid growth of those companies.
Another market that may fundamentally change following these latest developments, particularly in text and image generation, is that of consultants, contractors and freelancers. The new technologies may bite into the market of companies like Fiverr (NYSE: FVRR), which mediates between freelancers around the world. First, the time and cost savings brought by the new technologies may make the services offered on Fiverr's online marketplace cheaper. Second, potential customers may give up on using a service provider through Fiverr altogether and produce content and images themselves using AI generators.
Fiverr readied itself for this revolution earlier this year by launching several tools to help service providers enhance their work, such as logo and graphic design generators, as well as a narration system that reads entire texts aloud. This week, Fiverr became the first freelancer platform to open an AI category for service providers. Customers can find content editors, illustrators and designers there, as well as developers who use AI tools.
Fiverr CEO Micha Kaufman told "Globes," "Fiverr does not need to hand out tools like Midjourney or DALL-E to users or freelancers. But a layer of artists and experts in these tools has formed here, talents who know how to use them much better than you and me. We will help them reach a market that is currently thirsty for such service providers."
The developments already biting into the gaming industry
New trends in the gaming and 3D worlds also threaten the gaming giants, including Playtika (Nasdaq: PLTK), Moon Active, and King.com. The success of such companies depends more on a particular game going viral than on engineering achievements. OpenAI is behind some of the innovations in this field as well, having developed the Point-E engine, which is capable of creating 3D objects from a single written command.
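Point-E is open source, so the "single written command" claim can be checked directly. The sketch below is condensed from the text-to-point-cloud example in OpenAI's point_e repository (github.com/openai/point-e); the model and config names are those published there at the time of writing and may change:

```python
import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small text-conditioned base model plus an upsampler, as in the repo example.
base_name = "base40M-textvec"
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))

upsampler_model = model_from_config(MODEL_CONFIGS["upsample"], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint("upsample", device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[
        diffusion_from_config(DIFFUSION_CONFIGS[base_name]),
        diffusion_from_config(DIFFUSION_CONFIGS["upsample"]),
    ],
    num_points=[1024, 4096 - 1024],
    aux_channels=["R", "G", "B"],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=("texts", ""),  # only the base model sees the prompt
)

# The single written command.
samples = None
for x in sampler.sample_batch_progressive(
    batch_size=1, model_kwargs=dict(texts=["a red motorcycle"])
):
    samples = x  # keep the output of the final diffusion step

point_cloud = sampler.output_to_point_clouds(samples)[0]
```

A raw point cloud is still a long way from a shippable game asset, which helps explain the skepticism among the big studios described below.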
US startup Latitude raised $3 million for an engine that allows users to create relatively simple adventure games based on ChatGPT and image generators. Players choose one of the options presented to them textually at each stage, and the graphics generator then brings that reality to life. Scenario raised $6 million to make life easier for game developers, shortening design and animation processes that until now were done manually by improving model training workflows.
The big Israeli gaming companies are in no rush to join the wave and do not yet produce games using creative AI, but they are exploring internal uses - such as GitHub and Microsoft's Copilot software development tool, or, in marketing, the personalized tailoring of creatives for different types of users. In the gaming industry there is currently a lot of skepticism about the ability of creative AI to replace the existing industry: it is not sensitive to complex issues such as content adapted to different ages, or questions of race and nationality. At the same time, the gaming companies say that the animation and image generators will definitely serve the existing industry and help improve its products.
Playtika VP AI Assaf Asbag was familiar with the technology and its capabilities long before it became well known through the engines opened to the public last summer. "Creative AI is not a new phenomenon. Admittedly, we gave it a name recently, but it's an amazing process that has been going on here for many years." Asbag does not fear the new wave of creative engines: "It's an exciting time, and I'm curious to be a part of it. There is no doubt that AI will bring about a revolutionary change in the world of entertainment and games, as in many other fields such as autonomous vehicles, drug discovery, robotics and space exploration."
Published by Globes, Israel business news - en.globes.co.il - on January 31, 2023.
© Copyright of Globes Publisher Itonut (1983) Ltd., 2023.