New technologies can create both benefits and liabilities. In the case of AI, there are some concerns that such technology may create unintended consequences.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?”
Already, efforts have emerged overseas to regulate AI systems. Because information and data move across national boundaries with electronic speed, it’s unclear how such bans might work or how they could be enforced.
Other concerns about AI are also beginning to arise, including:
Misuse — Google reportedly blocks an estimated 100 million spam messages daily, and there are worries that AI could make the spam problem even worse. This isn’t just a text problem; it also applies to the creative world, where photographs and video can be manipulated to produce convincing fakes, such as a widely reposted “photo” of Pope Francis wearing a designer coat.
Consider, for example, the new ability to realistically duplicate voices. “A scammer,” says the FTC, “could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member’s voice — which he could get from content posted online — and a voice-cloning program. When the scammer calls you, he’ll sound just like your loved one.”
In Washington, DC, at the Office of the Comptroller of the Currency (OCC), a bank regulator established in 1863 but keeping up with the times, a new Office of Financial Technology has been created to review “trends in financial technology, emerging and potential risks, and the potential implications for OCC supervision.”
What we’ve seen so far from regulators is just the start. In April 2023, the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission issued a joint statement announcing their intention to enforce federal laws relating to AI and civil rights, fair competition, consumer protection, and equal opportunity. If AI’s impact is as broad as many predict, then it follows that every federal department, agency, and bureau will be looking at the new technology to assure marketplace fairness.
Accuracy — “OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning,” according to Noam Chomsky, Ian Roberts, and Jeffrey Watumull. Writing in The New York Times, they explain that AI systems “take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought.” While some have marveled at the output of these systems, the human effort required to keep them running is enormous: thousands of Google’s AI contractors say they are underpaid, overworked, and “scared.”
But even when responses may seem authoritative and logical, they may not be correct. ChatGPT itself states that it “sometimes writes plausible-sounding but incorrect or nonsensical answers.”
“ChatGPT doesn’t try to write sentences that are true,” said Gizmodo in February. “But it does try to write sentences that are plausible.”
In other words, as good as AI systems are today — and they are remarkably good — their responses are not yet a sure thing.
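To make the idea of “statistically probable outputs” concrete, the toy sketch below (our illustration, not how production systems like ChatGPT actually work — those use vastly larger neural networks) builds a tiny bigram model: it counts which word tends to follow which in some sample text, then generates new text by repeatedly sampling a likely next word.

```python
import random
from collections import defaultdict, Counter

def build_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(model, start, length=8, seed=0):
    """Produce text by repeatedly sampling a statistically probable next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no observed successor
            break
        words, counts = zip(*candidates.items())
        out.append(rng.choices(words, weights=counts, k=1)[0])
    return " ".join(out)

# Hypothetical sample text, purely for illustration
sample = "the market is strong and the market is growing and the outlook is strong"
model = build_bigram_model(sample)
print(generate(model, "the"))
```

The sketch also shows why such output can be plausible yet wrong: the model knows only word-frequency patterns, not facts, so fluency is no guarantee of truth.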
Mirages — AI with the broad access we’re now seeing is new and different. Rather than producing repetitive answers from a set of pre-programmed options, these systems review massive volumes of information to form unique responses, including some that may be considered strange.
Among other results, there have been declarations of love (The New York Times) and outright insults. According to NPR, one Associated Press reporter was told by an AI system that he was ugly, short, overweight, and non-athletic.
“Because of the surprising way they mix and match what they’ve learned to generate entirely new text, they often create convincing language that is flat-out wrong, or does not exist in their training data. A.I. researchers call this tendency to make stuff up a ‘hallucination,’ which can include irrelevant, nonsensical, or factually incorrect answers,” said The New York Times.
AI Evolution & Jobs — Automation can be seen as a threat to existing jobs, but it could also become a significant source of assistance in many fields. Some bank tellers have been replaced by ATMs, and self-checkouts have reduced the need for cashiers, yet many fields are using automation to their advantage. What makes AI unique is the sheer size and scope of the change it might induce.
“If generative AI delivers on its promised capabilities,” said a recent Goldman Sachs economic research report, “the labor market could face significant disruption. Using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation.”
AI will surely influence the future of work but — like other technological revolutions — it also has the potential to create large numbers of additional jobs. Personal computers, as one example, wiped out steno pools and manual typewriters but led to armies of support technicians, digital marketers, and Web developers.
“Recent AI advances, while seemingly impressive, are very narrow in scope and require a lot of human supervision and input to work in real applications,” says Skynet Today, adding:
- “While as many as 47% of current jobs contain tasks that may be automatable, less than 5% of jobs will be fully automatable by 2030.”
- “The actual percentage of jobs that will be automated will be lower, because technology adoption lags behind technology development due to costs in implementation, maintenance, and overcoming cultural and regulatory hurdles.”
Although jobs will be lost as we transition into a brave new world of AI, the future of work will likely see the development of entirely new industries, businesses, and employment options.
For example, the US workforce included 130.7 million employees in 1998, the year Google was established. And yet — despite the huge expansion of new technologies — employment reached 160.1 million at the start of 2023. On balance, AI will hopefully follow in the path of past technological breakthroughs and become a bountiful source of fresh jobs, widespread prosperity, and new CRE opportunities.
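The arithmetic behind those workforce figures is easy to check (the employment numbers come from the paragraph above; the calculation itself is ours):

```python
jobs_1998 = 130.7   # US employees, millions, 1998 (the year Google was established)
jobs_2023 = 160.1   # US employees, millions, start of 2023

added = jobs_2023 - jobs_1998
growth_pct = added / jobs_1998 * 100
print(f"Jobs added: {added:.1f} million ({growth_pct:.0f}% growth)")
# → Jobs added: 29.4 million (22% growth)
```

In other words, a quarter century of disruptive new technology coincided with roughly 29 million additional jobs, not fewer.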
Who Owns AI Data?
AI is based on data and that data comes from somewhere. Who, or what, can copyright materials obtained or created by AI systems? And how much — if anything — should content originators be paid for their work?
“The legal implications of using generative AI are still unclear,” explains an April article in the Harvard Business Review (HBR), “particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.”
HBR authors Gil Appel, Juliana Neelbauer, and David A. Schweidel add that “courts are currently trying to establish how intellectual property laws should be applied to generative AI, and several cases have already been filed. To protect themselves from these risks, companies that use generative AI need to ensure that they follow the law and take steps to mitigate potential risks, such as ensuring they use training data free from unlicensed content and developing ways to show provenance of generated content.”
At this point, the AI adventure has just begun. We know that something looms ahead on the AI front, but so far the boundaries and definitions of what’s to come remain fuzzy. For CRE, the possibilities are out there. For example, a recent report from Matthews highlights that AI can optimize property management in CRE by analyzing data on past and present property performance, predicting future trends, and providing insights on maintenance, energy management, tenant management, and security. This can lead to better decision-making, improved efficiency, and an enhanced tenant experience. The more we adopt AI in CRE, the more ways we will find to optimize it for our partners, tenants, and employees.