Yuga Labs, the parent company of Bored Ape Yacht Club, said in a new court filing that it does not have “copyright registrations” for the 10,000 images that constitute the successful NFT collection.
The new documents were submitted as part of the ongoing lawsuit between Yuga and artist Ryder Ripps, who used images from the BAYC collection for his own NFT collection, titled RR/BAYC.
https://www-artnews-com.cdn.ampproject.org/c/s/www.artnews.com/art-news/news/yuga-labs-admits-to-having-no-copyright-over-bored-ape-yacht-club-nfts-1234655279/amp/
On futures and foresight, the focus of my own queries, ChatGPT readily admits that it’s not a crystal ball (a good start). As a language model, “it does not have the ability to predict future events or to understand the long-term consequences of actions.” Or so it tells me. It will even qualify its responses to future-oriented questions with a standard disclaimer: “It’s hard to predict the future of [X] as it will depend on many factors…”
ChatGPT also confesses to other things it can’t do. It can’t understand what it says — neither the meaning nor concepts of its utterances. It can’t fact-check — at least not yet. By its own admission, it “does not have the ability to reason, plan or solve problems in the same way that a human can.”
https://www.iftf.org/insights/how-to-use-chatgpt-for-strategic-foresight-limitations-possibilities-and-workarounds/
In 2023, emotional AI—technology that can sense and interact with human emotions—will become one of the dominant applications of machine learning. For instance, Hume AI, founded by Alan Cowen, a former Google researcher, is developing tools to measure emotions from verbal, facial, and vocal expressions. Swedish company Smart Eyes recently acquired Affectiva, the MIT Media Lab spinoff that developed the SoundNet neural network, an algorithm that classifies emotions such as anger from audio samples in less than 1.2 seconds. Even the video platform Zoom is introducing Zoom IQ, a feature that will soon provide users with real-time analysis of emotions and engagement during a virtual meeting.
https://www-wired-com.cdn.ampproject.org/c/s/www.wired.com/story/artificial-intelligence-empathy/amp
“Writing a good song is not mimicry, or replication, or pastiche, it is the opposite,” he wrote. “It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self.
“This is part of the authentic creative struggle that precedes the invention of a unique lyric of actual value; it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering.”
https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave
The idea of digital twins — digital representations of physical systems, products or processes that serve as indistinguishable counterparts for purposes such as simulations, testing, monitoring and maintenance — has been around for some time. But indications are the concept’s time has come for wider adoption to support business applications.
https://frankdiana.net/2023/01/24/digital-twins-are-set-for-rapid-adoption-in-2023/
AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.
The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.
https://www.technologyreview.com/2021/04/13/1022568/big-tech-ai-ethics-guide/?utm_medium=tr_social&utm_campaign=site_visitor.unpaid.engagement&utm_source=Twitter&s=09
Consumers are wary of the recommendations made by algorithms. But according to new research co-authored by Yale SOM’s Taly Reich, showing that an algorithm can learn—that it improves over time—helps to resolve this distrust.
https://insights.som.yale.edu/insights/building-trust-with-the-algorithms-in-lives
Flexibility is often used loosely to mean not working in the office. We’re all still having those “How many days are you in?” conversations as hybrid working continues to evolve. Post-pandemic, however, expectations of flexible working are much broader than before. Recognizing this, Unilever’s approach is more holistic.
External data and commentary suggest that most people look for what we call ‘everyday’ flexibility – the ability to manage and adjust start and finish times, the freedom to manage work commitments in and around life commitments, and for work to be measured in outputs delivered rather than hours worked.
https://www.weforum.org/agenda/2023/01/flexible-working-productivity-and-growth-davos23?utm_source=linkedin&utm_medium=social_scheduler&utm_term=Davos+2023&utm_content=24/01/2023+05:00
Related report: Work in the Pandemic Age
Across the country, university professors like Mr. Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.
https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html