Everything posted by ResidentialBusiness

  1. Why are some jobs better than others? Well, it largely depends on people’s preferences. In other words, one person’s dream job may be another person’s nightmare. And yet, there are also clearly some universal or at least generalizable parameters that make most people accept the idea that some jobs are objectively better than others — or at least seen by most as generally preferable.

Pay and purpose

For example, jobs that pay well, offer stability, and provide opportunities for growth are almost universally considered better. A tenured professorship, a senior engineering role at a reputable company, or a stable medical position all combine financial security with long-term prospects and prestige. In contrast, poorly paid, insecure, or dead-end roles (like gig work with no benefits or exploitative manual labor with long, brutal shifts and an alienating experience) are widely viewed as worse, even if a few individuals might value their flexibility or simplicity.

Then there’s autonomy. Jobs that grant people a degree of control over how and when they work (e.g., creative professionals, entrepreneurs, and researchers) tend to score higher on satisfaction than those defined by micromanagement or rigid supervision. Autonomy is a proxy for trust and respect, and it correlates strongly with both engagement and mental health. Few people dream of jobs where every move is monitored, and most aspire to roles where they can think, decide, and act freely.

Unsurprisingly, purpose matters, too. Occupations that contribute to something meaningful (whether saving lives, advancing knowledge, or building something lasting) are viewed as more fulfilling than those that feel transactional or pointless. A teacher inspiring students, a scientist developing a vaccine, or an architect designing a community space are all examples of work that confers a sense of legacy. By contrast, even lucrative jobs can feel hollow when they lack purpose or moral value. This may explain the low correlation between pay and job satisfaction, which highlights the fact that we tend to overestimate the importance of compensation when making career choices. In that sense, the “best” jobs aren’t just about rewards, but about how they make people feel about themselves and their place in the world.

What the science says

A good way to acknowledge these nuances, and yet still predict whether a person is likely to access better jobs, is to examine why some individuals have more choices than others. That is, in any job or labor market, available job or career opportunities may have different degrees of appeal or attractiveness; but from a job-seeker’s perspective, the more employable you are, the more likely you are to find and maintain a desirable job — whether we look at subjective or objective dimensions of desirability. With this in mind, here are some critical learnings about the science of employability that explain why certain people are better able to access in-demand jobs:

(1) Their personality

Research has consistently shown that employability is largely a function of personality. Traits such as conscientiousness, emotional stability, curiosity, and sociability predict not only who gets hired, but also who thrives once employed. Personality shapes reputation (the way others see us), and reputation determines whether we are trusted, promoted, and retained. For instance, people who are reliable, calm under pressure, and open to learning tend to be more employable than those who are erratic, avoid feedback, or are difficult to work with.
Moreover, personality also predicts job satisfaction: even in objectively good jobs, neurotic or disagreeable people are less likely to feel content, whereas optimistic and adaptable individuals find meaning in a wider range of roles, and are resilient, if not satisfied, even in jobs that make most people miserable. In short, who you are determines both the jobs you can get and how you feel about them once you do.

(2) Their social class

While most advanced economies like to think of themselves as meritocracies, the data on social mobility suggest otherwise. In the United States, only about half of children born to parents in the bottom income quintile will ever move up the ladder, and just 7% will reach the top quintile. In the UK, the “class pay gap” between working-class and professional backgrounds persists even among graduates. Privilege still buys access to education, networks, internships, and employers willing to take a chance. Sociologists call this social capital; in plain terms, it means your parents’ contacts and credentials still matter more than your own potential. The world may be trending toward meritocracy, but it hasn’t quite arrived there yet.

(3) Where you are born

Location remains one of the most powerful predictors of career outcomes. The “Where-to-Be-Born Index” ranks countries by the opportunities they afford their citizens, and being born in Switzerland, Denmark, or Singapore gives you exponentially better odds of landing a good job than being born in Haiti, South Sudan, or Bhutan. Access to education, infrastructure, technology, and basic security all shape employability. The same talent, if born in a country with weak institutions or unstable governance, is far less likely to achieve its potential. In that sense, geography is more likely than talent to mean destiny, at least until global mobility or remote work meaningfully narrows the gap.

(4) Their values, interests, and preferences

Even within similar contexts, people differ in what they want from work. Psychologists like Shalom Schwartz and Robert Hogan have shown that our motivational values (e.g., achievement, power, altruism, security, stimulation, and so forth) determine what “fit” looks like for us. Someone who values adventure and creativity will flourish in start-ups or design roles, while a person who craves structure and predictability may prefer government or finance. Misalignment between values and job environment (say, a highly independent person in a bureaucratic culture) leads to burnout or disengagement. The better your job matches your values, the more likely you are to perceive it as a good one.

Adapt, evolve, and improve

In the end, “better jobs” are not just better paid or better designed; they’re better matched to the people who hold them. Some of this is luck: being born in the right family, in the right country, with the right temperament, will simply afford you a wider range and choice of matches, so you are bound to find more options. But much of it also depends on deliberate self-awareness, namely understanding what kind of environments bring out the best in you, and aligning your career moves accordingly. From a societal perspective, the goal should be to expand access to good jobs by improving education, reducing inequality, and helping people develop the skills and traits that make them employable. That means focusing less on pedigree and more on potential, less on connections and more on competence. Ultimately, the world of work will never be perfectly fair, but it can be fairer.
And while none of us can control where we start, we can control how we grow. The most employable people are not just those who fit the system, but those who learn to adapt, evolve, and turn whatever job they have into something better. View the full article
  2. When Elon Musk launched Grokipedia, his AI-generated encyclopedia intended to rival Wikipedia, it was not just another experiment in artificial intelligence. It was a case study in everything that can go wrong when technological power, ideological bias, and unaccountable automation converge in the same hands. Grokipedia copies vast sections of Wikipedia almost verbatim, while rewriting and “reinterpreting” others to reflect Musk’s personal worldview. It could genuinely be conceived as the antithesis of everything that makes Wikipedia good, useful, and human. Grokipedia’s edits aggressively editorialize topics ranging from climate change, to immigration, to (of course) the billionaire’s own companies and bio. The result is less an encyclopedia than an algorithmic mirror of one man’s ideology: a digital monument to self-confidence so unbounded that it might make a Bond villain blush.

From collaboration to colonization

Wikipedia remains one of humanity’s most extraordinary collective achievements: a global, volunteer-driven repository of knowledge, constantly refined through debate and consensus. Its imperfections are human, visible, and correctable. You can see who edited what, when, and why. Grokipedia is its antithesis. It replaces deliberation with automation, transparency with opacity, and pluralism with personality. Its “editors” are algorithms trained under Musk’s direction, generating rewritten entries that emphasize his favorite narratives and downplay those he disputes. It is a masterclass in how not to make an encyclopedia, a warning against confusing speed with wisdom. In Grokipedia, Musk has done what AI enables too easily: colonize collective knowledge. He has taken a shared human effort (open, transparent, and collaborative) and automated it into something centralized, curated, and unaccountable. And he has done so while doing the absolute minimum that the Wikipedia copyleft license requires, in extremely small print, in a place where nobody can see it.

The black box meets the bullhorn

This is not Musk’s first experiment with truth engineering. His social network, X, routinely modifies visibility and prioritization algorithms to favor narratives that align with his worldview. Now Grokipedia extends that project into the realm of structured knowledge. It uses the language of authority (entries, citations, summaries) to give bias the texture of objectivity. This is precisely the danger I warned about in an earlier Fast Company article: the black-box problem. When AI systems are opaque and centralized, we can no longer tell whether an output reflects evidence or intention. With Grokipedia, Musk has fused the two: a black box with a bullhorn. It is not that the platform is wrong on every fact. It is that we cannot know which facts have been filtered, reweighted, or rewritten, or according to what criteria. Or worse, we can have the intuition that the whole thing starts with a set of commands that completely editorialize everything. The line between knowledge and narrative dissolves.

The ideological automation problem

The Grokipedia project exposes a deeper issue with the current trajectory of AI: the industrialization of ideology. Most people worry about AI misinformation as an emergent property: something that happens accidentally when models hallucinate or remix unreliable data. Grokipedia reminds us that misinformation can also be intentional. It can be programmed, curated, and systematized by design.
Grokipedia is positioned as “a factual, bias-free alternative to Wikipedia.” That framing is itself a rhetorical sleight of hand: it presents personal bias as neutrality, and neutrality as bias. It is the oldest trick in propaganda, only now automated at planetary scale. This is the dark side of generative AI’s efficiency. The same tools that can summarize scientific papers or translate ancient texts can also rewrite history, adjust emphasis, and polish ideology into something that sounds balanced. The danger is not that Grokipedia lies, but that it lies fluently.

Musk, the Bond villain of knowledge

There’s a reason Musk’s projects evoke comparisons to fiction: the persona he has cultivated (the disruptor, the visionary, the self-styled truth-teller) has now evolved into something closer to Bond-villain megalomania. In the films, the villain always seeks to control the world’s energy, communication, or information. Musk now dabbles in all three. He builds rockets, satellites, social networks, and AI models. Each new venture expands his control over a layer of global infrastructure. Grokipedia is just the latest addition: the narrative layer. If you control the story, you control how people interpret reality.

What AI should never be

Grokipedia is a perfect negative example of what AI should never become: a machine for amplifying one person’s convictions under the pretense of collective truth. It is tempting to dismiss the project as eccentric or unserious. But that would be a mistake. Grokipedia crystallizes a pattern already spreading across the AI landscape: many emerging AI systems, whether from OpenAI, Meta, or Anthropic, are proprietary, opaque, and centrally managed. The difference is that Musk has made his biases explicit, while others keep theirs hidden behind corporate PR. By appropriating a public commons like Wikipedia, Grokipedia shows what happens when AI governance and ethics are absent: intellectual resources built for everyone can be re-colonized by anyone powerful enough to scrape, repackage, and automate them.

The Wikipedia contrast

Wikipedia’s success comes from something AI still lacks: accountability through transparency. Anyone can view the edit history of a page, argue about it, and restore balance through consensus. It is messy, but it is democratic. AI systems, by contrast, are autocratic. They encode choices made by their creators, yet present their answers as universal truth. Grokipedia takes this opacity to its logical conclusion: a single, unchallengeable version of knowledge generated by an unaccountable machine. It’s a sobering reminder that the problem with AI is not that it’s too creative or too powerful, but that it makes it too easy to exercise power without oversight.

Lessons for the AI era

Grokipedia should force a reckoning within the AI community and beyond. The lesson is not that AI must be banned from knowledge production, but that it must be governed like knowledge, not like software. That means:

  • Transparency about data sources and editorial processes.
  • Pluralism: allowing multiple voices and perspectives rather than central control.
  • Accountability, where outputs can be audited, disputed, and corrected.
  • And above all, humility: the recognition that no single person, however brilliant, has the right to define what counts as truth.

AI has the potential to amplify human understanding. But when it becomes a tool of ideological projection, it erodes the very idea of knowledge.
The moral of the story

In the end, Grokipedia will not replace Wikipedia: it will stand as a cautionary artifact of the early AI age, the moment when one individual mistook computational capacity for moral authority. Elon Musk has built many remarkable things. But with Grokipedia, he has crossed into the realm of dystopian parody: the digital embodiment of the Bond villain who, having conquered space and social media, now seeks to rewrite the encyclopedia itself. The true danger of AI is not the black box. It’s the person who owns the box and decides what the rest of us are allowed to read inside it. View the full article
  3. Hello and welcome to Modern CEO! I’m Stephanie Mehta, CEO and chief content officer of Mansueto Ventures. Each week this newsletter explores inclusive approaches to leadership drawn from conversations with executives and entrepreneurs, and from the pages of Inc. and Fast Company. If you received this newsletter from a friend, you can sign up to get it yourself every Monday morning.

Glenn Fogel joined dot-com darling Priceline in early 2000, a year after the “name your price” travel site’s blockbuster initial public offering (IPO). “I joined one week before the Nasdaq peaked,” Fogel recalls. Within a year of his arrival, the stock had cratered to $6 a share. By March 2002, the Nasdaq, a proxy for the burgeoning e-commerce and tech infrastructure companies that went public, had plunged 77% from its March 2000 highs. Quips Fogel: “At the time, my mother was wondering whether I still had a job.”

Today, Fogel is CEO and president of Booking Holdings—parent of Priceline, KAYAK, Booking.com, OpenTable, and other brands. His experience navigating the dot-com bubble (more on that in a moment) affords him a compelling perch from which to observe the current generative artificial intelligence (gen AI) boom. He sees parallels in the gold-rush mentality of both booms: “There’s lots of investments, lots of new companies,” he says. “Many of them will not make it. Many investors will lose money.” Corporate investment in AI reached $252.3 billion, and private investment in gen AI reached $33.9 billion in 2024, according to data compiled by the Stanford Institute for Human-Centered Artificial Intelligence.

The key difference between the dot-com bubble and now? “I would say in terms of the possibility for human society, I think the possible transformations from gen AI are so much greater than what was possible from the [startups of] the nineties,” he says. Fogel points to breakthroughs like Google’s AlphaFold model, which decoded protein folding and could accelerate drug discovery. “Every area really of our society can be greatly improved by using gen AI,” he says. “That’s the thing that’s so exciting.”

Happy travelers

In travel, the stakes may not be as high, but the impact on daily life could be profound. “Maybe we’re not going to save a lot of lives the way that the healthcare industry is going to be able to do, but maybe we’ll make the experience much happier,” he says. Indeed, the company is already deploying AI to reduce customer-service wait times, using gen AI chatbots that can solve problems instantly. When a human agent does handle a call, the bots generate conversation summaries and next steps—work that previously consumed significant amounts of agent time.

Embracing emerging technology has been key to Booking Holdings’s longevity. When predecessor company Priceline Group bought Booking.com in 2005, it acquired Booking’s prowess in leveraging Google’s paid search and platforms that enabled the business to rapidly test messaging to optimize conversion rates. The company subsequently bought travel search engine KAYAK in 2013 and restaurant reservation platform OpenTable in 2014. Priceline Group changed its name to Booking Holdings in 2018.

The long view

Travel itself is currently experiencing a boom. Despite economic uncertainty, U.S. consumers, especially those at the high end of the market, are prioritizing travel, with airlines and hotels indicating strong demand for premium products.
Indeed, at the end of October, Booking Holdings reported better-than-expected third-quarter earnings and said it continues to see “steady travel demand trends” in the current quarter. Having led Booking Holdings through the dot-com boom and bust—as well as the COVID-19 pandemic, which led to a near-complete shutdown of travel—Fogel acknowledges that nothing goes up forever. “I don’t know when those bad times are going to come, but they’re going to come sometimes,” he says. Still, he takes the long view: “I do know, in the long run, travel is always going to increase. It is human nature . . . people wanting to travel.”

This time it’s different?

Do you agree that the societal benefits of gen AI companies and technologies dwarf the contributions of the dot-coms? If so, what breakthroughs excite you most? Send your examples to me at stephaniemehta@mansueto.com. I’d love to share your scenarios in a future newsletter.

Read more: bubble theories

  • Why the AI-fueled stock market isn’t a bubble waiting to pop
  • There isn’t an AI bubble. There are three
  • Are we in an AI bubble?

View the full article
  4. Below, Gene Ludwig shares five key insights from his new book, The Mismeasurement of America: How Outdated Government Statistics Mask the Economic Struggle of Everyday Americans. Gene is the former Comptroller of the Currency and founder of the Ludwig Institute for Shared Economic Prosperity (LISEP), a nonprofit dedicated to uncovering the truths that official statistics too often obscure. His writing has appeared in The New York Times, The Wall Street Journal, The Atlantic, Politico, The Financial Times, and TIME.

What’s the big idea?

Americans keep hearing that the economy is strong. Unemployment is low. Wages are rising. Growth is steady. But for millions of families, those headlines feel like a cruel joke. The cost of rent, groceries, and healthcare keeps climbing while steady, well-paid work remains out of reach. The disconnect isn’t just perception—it’s baked into the way we measure economic success. Listen to the audio version of this Book Bite—read by Gene himself—below, or in the Next Big Idea App.

1. We are at an economic tipping point

Throughout history, when governments fail to fully appreciate the realities faced by their people, it leads to crisis. The United States may be on the brink of such economic and societal unrest. The unrest that led to the French Revolution and the economic imbalances preceding the Great Depression are both cases in point. In the late eighteenth century, the oppressive economic situation facing the French people went unacknowledged by the royal family for decades. The French ruling class considered the truth about the nation’s fiscal crisis to be nefarious—a threat to their power. Marie Antoinette, when told the peasants had no bread, replied, “Let them eat cake!” Whether the remark is literal or legend, it captures the ruling class’s indifference. Soon after, the Revolution erupted, bringing turmoil and suffering to French citizens of every rank and station. The same narrative arc applied a century and a half later when the Great Depression loomed. In both instances, economic data that could have set off alarm bells was available—more accurate figures that would have revealed the risks emerging—and this perspective might have prompted action that could have softened the blow, if not avoided the crises altogether. But the data was either confusing, confounded with other contrary data, or affirmatively hidden. The effects were catastrophic.

2. A quarter of Americans are functionally unemployed

The unemployment statistics our government releases monthly are misleading. If someone is looking for full-time employment but finds nothing except a single hour of work in a week, they are considered “employed” in the eyes of the government. For purposes of official government statistics, this one-hour employee is in the same category as someone secure in a full-time job. This logic extends to wages. Someone who works full- or part-time for a salary that falls below the poverty line (around $25,000 a year for a three-person household) is classified the same way as someone earning $1 million every month.

“The United States may be on the brink of such economic and societal unrest.”

LISEP’s research team and I consider anyone in the previous two situations to be functionally unemployed. The government’s most recent unemployment rate is 4.3 percent, but our research finds that 24.7 percent of American workers are functionally unemployed.
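To make that definition concrete, here is a minimal sketch, in Python, of the classification rule as just described. The poverty threshold comes from the text above; the field names and the rule's exact form are illustrative assumptions, not LISEP's actual survey methodology.

```python
# Illustrative sketch of the "functionally unemployed" rule described above.
# The threshold is taken from the text; the fields are assumptions for
# demonstration only, not LISEP's actual methodology.

POVERTY_LINE_ANNUAL = 25_000  # approx. for a three-person household, per the text

def functionally_unemployed(seeking_full_time: bool,
                            has_full_time_job: bool,
                            annual_wage: float) -> bool:
    """Classify a worker under the definition described in the text."""
    # Case 1: wants full-time work but does not have it (even one paid
    # hour a week counts as "employed" in the official statistics).
    if seeking_full_time and not has_full_time_job:
        return True
    # Case 2: works, but earns below the poverty line.
    if annual_wage < POVERTY_LINE_ANNUAL:
        return True
    return False

# A one-hour-a-week job seeker and a sub-poverty-wage worker both count:
print(functionally_unemployed(True, False, 8_000))   # True
print(functionally_unemployed(False, True, 22_000))  # True
print(functionally_unemployed(False, True, 60_000))  # False
```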
3. Pay statistics ignore part-time and unemployed job seekers

The government reports on “median wages” every quarter. The idea behind their metric is simple and straightforward: If you line up all full-time employees in order of their weekly earnings, the person directly in the middle earns the median wage. But this statistic only considers the wages of people who are currently employed full-time, overlooking millions of part-time workers and unemployed job seekers. So, the moment a low-wage factory worker receives a pink slip, her salary is deleted from the sample altogether. The moment a farm worker’s seasonal employment ends, his salary is similarly deleted. What this means is that the official earnings measure shows an overstated wage that doesn’t reflect the reality for many low- and middle-income Americans. It can even appear to improve during economic downturns, because low-wage workers are disproportionately affected by layoffs. When the economy went into near freefall during the early months of the COVID-19 pandemic, government-reported median earnings rose seven percent. During that same period, the percentage of functionally unemployed Americans rose from 25.7 percent to 32.8 percent.

4. Yes, your groceries are more expensive

When people talk about inflation, they’re usually referring to changes in the Consumer Price Index, or CPI. The CPI tracks the prices of some 80,000 goods and services, from apples to apartments, baby formula to boats, and much more. The idea is that it gives us a single figure to measure the changing cost of a basket of all consumer products.

“CPI obscures the true cost of living for working-class Americans.”

This basket is so wide-ranging that it doesn’t reflect how “ordinary” consumers experience cost-of-living changes, as most Americans are not buying 80,000 things. If the costs of second homes tripled while everything else in the basket stayed flat, the average American household wouldn’t feel a thing—the price hike would get averaged in, but it wouldn’t impact their life. But the opposite has happened: Over the past two decades, the price of jewelry has risen by about 39 percent, while essential goods like bread are up by 112 percent and ground beef by 155 percent. When these items are measured alongside each other in the CPI, the relative stability of luxury items masks the inflation faced by Americans of more modest means. From 2001 to 2023, the CPI points to a 72 percent rise in living costs, yet our analysis of essential expenses—housing, food, transportation, healthcare, and other basics—shows those costs climbed 97 percent. CPI obscures the true cost of living for working-class Americans.
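The arithmetic behind that masking effect is easy to demonstrate. Below is a toy sketch, in Python, of a weighted price index: the bread, ground beef, and jewelry growth figures come from the text, while the equal basket weights are an invented simplification rather than the actual CPI weighting scheme.

```python
# Toy illustration of how a broad basket can mask essentials inflation.
# Price-growth figures are taken from the text; the equal weights are
# invented for demonstration and are not the actual CPI weights.

price_growth = {          # cumulative price growth over two decades
    "bread": 1.12,        # +112%
    "ground_beef": 1.55,  # +155%
    "jewelry": 0.39,      # +39%
}

def index_growth(weights: dict[str, float]) -> float:
    """Weighted-average price growth for a given basket."""
    total = sum(weights.values())
    return sum(price_growth[item] * w for item, w in weights.items()) / total

broad_basket = {"bread": 1, "ground_beef": 1, "jewelry": 1}  # everything averaged in
essentials = {"bread": 1, "ground_beef": 1}                  # what modest households buy

print(f"broad index:      +{index_growth(broad_basket):.0%}")  # +102%
print(f"essentials index: +{index_growth(essentials):.0%}")    # +134%
```

The relatively flat luxury item pulls the headline number down, even though the goods that working-class households actually buy rose far faster.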
5. We need better statistics

The headline statistics we currently employ to understand America’s economy are profoundly misleading and, unfortunately, drive policy. The CPI is pivotal in determining Social Security benefits, as well as who qualifies for the Supplemental Nutrition Assistance Program, Head Start, and Pell Grants. At least twelve states and Washington, D.C., use the CPI to determine minimum wages. Our failure to produce statistics that accurately reflect the nation’s economic reality makes it much harder to shape highly effective policy responses—and harder to identify the tipping point of economic and social unrest. Simply put, when you aim at the wrong target, you miss.

“Human nature favors expeditious, rosy analysis rather than the rigor required to glean accuracy.”

Flaws in widely accepted economic statistics impede important decision-making. In many cases, those who accept economic misrepresentations do so for benign reasons: The data is too difficult to collect with sufficient regularity or precision, or the samples aren’t sufficiently comprehensive. Human nature favors expeditious, rosy analysis rather than the rigor required to glean accuracy, particularly when accurate numbers may be gloomy.

At LISEP, we’ve developed alternatives to these imperfect statistics. Our True Rate of Unemployment metric includes the functionally unemployed, and our True Weekly Earnings measure includes the entire workforce. Our True Living Cost index narrows the basket of indexed consumer goods to those truly essential to the average American, while our Minimal Quality of Life index measures what it costs to not just get by but to actually have an opportunity to climb the economic ladder. Finally, our Shared Economic Prosperity measure tracks how the country’s economic growth translates into opportunity for all. For decades, policymakers and leaders have judged success or failure by distorted standards, and ordinary Americans have paid the price. Unless we change the headline statistics to reflect the reality Americans actually feel, we will keep steering down the wrong paths.

Enjoy our full library of Book Bites—read by the authors!—in the Next Big Idea App. This article originally appeared in Next Big Idea Club magazine and is reprinted with permission. View the full article
  5. With more than 100,000 artifacts dating back thousands of years, nearly 900,000 square feet of floor space, a site that spans more than 120 acres, and a total price tag estimated to be more than $1 billion, it’s not hyperbole to call the Grand Egyptian Museum outside Cairo, Egypt, the most significant museum project in recent decades. It’s the kind of blockbuster building that would have even the starriest of starchitects salivating at the chance to lay claim to what’s likely to become one of Egypt’s most visited tourist attractions. So, in hindsight, it’s a bit unexpected that the architecture firm that won the museum’s international design competition way back in 2002 was a little-known office from Ireland with no completed projects to its name and only three people on staff.

Dublin-based Heneghan Peng Architects was virtually unknown when its concept was chosen, unanimously, out of more than 1,500 submissions as the winning design. “We hadn’t built any buildings,” says Róisín Heneghan, the firm’s cofounder. “We had one project just starting on site when we won the competition.” A lot has changed since then. The museum had an initial target opening date of 2007, but several delays caused by the global financial crisis, the Arab Spring, and the COVID pandemic kept stretching the timeline. Heneghan Peng Architects’ design is now fully built and, as of November 1, open to the public.

Thousands of years of history

The Grand Egyptian Museum’s design is a sprawling spread of airplane hangar-sized concourses, sculpted landscapes, conservation workshops, and a network of underground storage facilities. The museum building itself is a cavernous space with 12 main galleries and direct views of the pyramids of Giza. A vast entrance hall sits under a tall sawtooth roof that doubles as an open-air pavilion, shading a ticketing area accented by a 30-foot-tall statue of Ramses II that’s more than 3,000 years old. On the facade, throughout the landscape, and even within the building’s structure, pyramid shapes abound.

Central to the design, according to Heneghan, is not so much the main building but the placement of the museum itself. “People were saying to us, ugh, you Westerners, you all are so fascinated by the desert, but Egypt is about the Nile,” she says. That led the architects to think first about how the museum should fit into that dichotomy. With a site selected near the famous pyramids in Giza, just on the fringe of Cairo’s urban footprint, it was clear that the museum would sit in the middle space between the desert and the Nile valley, a space that has been carved away by millennia of river flow. “There’s a 50-meter difference in level between one side of the site and the other, because that’s where the desert and the Nile met,” Heneghan says. “When you’re coming out of the city, you see the pyramids on the plateau. So what we decided was that the museum should never go above the plateau level, but that it should exist between the plateau and the Nile Valley.” Despite grand ceilings capable of holding towering statues, the building sits low to the ground, with a fair amount of its bulk sunk into the landscape. The design of the Grand Egyptian Museum utilizes large walkways and views within the museum to give visitors a zoomed-out experience of the sprawling history represented in the galleries.
The first part of the museum visitors see after they enter is a long staircase bordered by thousands of artifacts, sarcophagi, and statuary that tracks the entire 4,000-year span of Egypt’s pharaonic history. It’s a walking crash course for the museum’s mostly international visitors before they reach the top, where discrete sections of Egypt’s ancient history are explored in more depth. Its main galleries cover themes like kings and queens, religious belief systems, and ancient Egyptian society, and the museum features an extensive collection of artifacts from the tomb of King Tutankhamun. The museum’s layout allows each of these galleries to stand on its own, but with visual connections to the others in order to tie them into a broader arc of history. “The galleries are themed, but at the same time from different points you can see across, so you can make connections across the whole timescale,” Heneghan says. “That helped organize it. If we had tried to make it human-scaled, I think we would have found it more difficult.”

An engineering feat

The architects also had to grapple with the realities of designing such a massive structure in the desert heat of Egypt. Partly out of consideration for the operational costs of running such a space, they designed the galleries to pull in daylight from lateral angles, dappled through metal shading structures and overhangs. This approach also works with the collections on display. “It’s quite a lot of stone,” Heneghan says. “And stone works well with natural daylight.” To handle the sheer weight of the statues on display, the building has incredibly thick concrete floors, which also serve to regulate the building’s climate, absorbing the cool night temperatures and slowly releasing that coolness during the heat of the day. “What we were trying to do is make a really heavy structure, like a church,” Heneghan says.

Though Heneghan Peng Architects is the design architect of the Grand Egyptian Museum, the firm had plenty of help bringing the concept to fruition. Even at the competition stage, once they were named one of several finalists, they called in extra assistance from the engineering firms Arup and Buro Happold. Cairo-based Raafat Miller Consulting is credited alongside Heneghan Peng Architects as the project’s architect. Given the many delays that have hampered the project, Heneghan says her firm has had very little to do with the design since it was largely finalized around 2009. “Once it went into construction, we weren’t really involved,” she says. The project has evolved since then, with new structural, technological, and material changes that have necessarily altered the overall design. Heneghan says the facade of the building is a departure from a more reserved approach in the initial design, but she accepts that some tweaks were inevitable. “You know, 16 years is a really long time,” she says.

But there are also parts of the final museum that were among the architects’ initial thinking about what this museum could be, way back in 2002. Heneghan seems gratified that certain major elements, like the grand staircase leading up to the main galleries and the direct views of the pyramids, made it through after all these years. “Some things are very much what was envisaged,” she says. View the full article
  6. A couple of weeks ago, Ezra Klein interviewed AI researcher Eliezer Yudkowsky about his new, cheerfully titled book, If Anyone Builds It, Everyone Dies. Yudkowsky is worried about so-called superintelligence: AI systems so much smarter than humans that we cannot hope to contain or control them. As Yudkowsky explained to Klein, once such systems exist, we’re all doomed. Not because the machines will intentionally seek to kill us, but because we’ll be so unimportant and puny to them that they won’t consider us at all. “When we build a skyscraper on top of where there used to be an ant heap, we’re not trying to kill the ants; we’re trying to build a skyscraper,” Yudkowsky explains. In this analogy, we’re the ants.

In this week’s podcast episode, I go through Yudkowsky’s interview beat by beat and identify all the places where I think he’s falling into sloppy thinking or hyperbole. But here I want to emphasize what I believe is the most astonishing part of the conversation: Yudkowsky never makes the case for how he thinks we’ll succeed in creating something as speculative and outlandish as superintelligent machines. He just jumps right into analyzing why he thinks these superintelligences will be bad news.

The omission of this explanation is shocking. Imagine walking into a bioethics conference and attempting to give an hour-long presentation about the best ways to build fences to contain a cloned Tyrannosaurus. Your fellow scientists would immediately interrupt you, demanding to know why, exactly, you’re so convinced that we’ll soon be able to bring dinosaurs back to life. And if you didn’t have a realistic and specific answer—something that went beyond wild extrapolations and a general vibe that genetics research is moving fast—they’d laugh you out of the room. But in certain AI safety circles (especially those emanating from Northern California), such conversations are now commonplace. Superintelligence as an inevitability is just taken as an article of faith.

Here’s how I think this happened. In the early 2000s, a collection of overlapping subcultures emerged from tech circles, all loosely dedicated to applying hyper-rational thinking to improve oneself or the world. One branch of these movements focused on existential risks to intelligent life on Earth. Using a concept from discrete mathematics called expected value, they argued that it can be worth spending significant resources now to mitigate an exceedingly rare future event, if the consequences of such an event would be sufficiently catastrophic. This might sound familiar, as it’s the logic that Elon Musk, who identifies with these communities, uses to justify his push toward us becoming a multi-planetary species.
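To see the expected-value logic at work, here is a minimal worked example in Python; the probability and damage figures are invented purely for illustration.

```python
# Toy illustration of the expected-value reasoning described above.
# The probability and damage figures are invented for illustration only.

p_event = 1e-6        # assumed yearly probability of a catastrophic event
cost_if_event = 1e13  # assumed damage in dollars if it occurs

expected_loss = p_event * cost_if_event
print(f"Expected annual loss: ${expected_loss:,.0f}")  # $10,000,000

# Under this logic, spending up to ~$10 million a year on mitigation is
# rational even though the event itself is exceedingly unlikely. This is
# the argument pattern used to justify large investments against
# speculative catastrophes, including rogue AI.
```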
As these rationalist existential-risk conversations gained momentum, one of the big topics pursued was rogue AI that becomes too powerful to contain. Thinkers like Yudkowsky, along with Oxford’s Nick Bostrom and many others, began systematically exploring all the awful things that could happen if an AI became sufficiently smart. The key point about all of this philosophizing is that, until recently, it was all based on a hypothetical: What would happen if a rogue AI existed? Then ChatGPT was released, triggering a general vibe of rapid advancement and diminishing technological barriers.

As best I can tell, for many in these rationalist communities, this event caused a subtle, but massively consequential, shift in their thinking: they went from asking, “What will happen if we get superintelligence?” to asking, “What will happen when we get superintelligence?” These rationalists had been thinking, writing, and obsessing over the consequences of rogue AI for so long that when a moment came in which suddenly anything seemed possible, they couldn’t help but latch onto a fervent belief that their warnings had been validated; a shift that made them, in their own minds, quite literally the potential saviors of humanity. This is why those of us who think and write about these topics professionally so often encounter people who seem to have an evangelical conviction that the arrival of AI gods is imminent, and then dance around inconvenient information, falling back on dismissal or anger when questioned. (In one of the more head-turning moments of their interview, when Klein asked Yudkowsky about critics—such as myself—who argue that AI progress is stalling well short of superintelligence, he retorted: “I had to tell these Johnny-come-lately kids to get off my lawn.” In other words, if you’re not one of the original true believers, you shouldn’t be allowed to participate in this discussion! It’s more about righteousness than truth.)

For the rest of us, however, the lesson here is clear. Don’t mistake conviction for correctness. AI is not magic; it’s a technology like any other. There are things it can do and things it can’t, and people with engineering experience can study the latest developments and make reasonable predictions, backed by genuine evidence, about what we can expect in the near future. And indeed, if you push the rationalists long enough on superintelligence, they almost all fall back on the same answer: all we have to do is make an AI slightly smarter than ourselves (whatever that means), and then it will make an AI even smarter, and that AI will make an even smarter AI, and so on, until suddenly we have Skynet. But this is just a rhetorical sleight of hand—a way to absolve any responsibility for explaining how to develop such a hyper-capable computer. In reality, we have no idea how to make our current AI systems anywhere near powerful enough to build whole new, cutting-edge computer systems on their own. At the moment, our best coding models seem to struggle with consistently producing programs more advanced than basic vibe coding demos.

I’ll start worrying about Tyrannosaurus paddocks once you convince me we’re actually close to cloning dinosaurs. In the meantime, we have real problems to tackle. The post Why Are We Talking About Superintelligence? appeared first on Cal Newport. View the full article
  7. A successful summer pilot has led to a wider rollout of a program whereby Robinhood Gold subscribers will be able to find discounted rates and closing costs. View the full article
  8. Have you ever been to the Gamerhood? Part game show, part reality series, it recently wrapped its fourth season in August. Over five weekly episodes on Twitch and YouTube, the show pitted gaming creators like Kai Cenat, Ludwig, Mark Phillips, and Berleezy against each other in a combination of gaming and IRL challenges. The third season, from last summer, attracted more than 23 million views. In September, the show went mainstream when season four landed on Prime Video. Even before that, just on YouTube and Twitch, season four was getting about 20 million views for each episode. Not too shabby for a show created by a brand.

That’s right, Gamerhood is fully owned by State Farm, and it’s a key part of the brand’s marketing strategy. State Farm’s head of marketing Alyson Griffin says that despite the unpredictability of creators and reality TV, the reward is worth any perceived brand risk. “We believe in them,” she says. “We don’t script them. They say the things they want to say, they can do the things they want to do. And we’re in the risk business! Nobody does that in insurance, right? We’re excited about extending the reach of that for an even bigger audience.”

Some brands make funny ads. Some brands invest in entertainment IP. Some brands go deep into major sports sponsorships. State Farm utilizes all of these—and Jake, of course—to firmly embed the brand in culture. It’s a flywheel of culturally relevant content across many different audiences, which has helped the company boost its net worth to $145.2 billion in 2024, up from $134.8 billion in 2023. “There’s a sea of sameness in insurance or financial services in general,” says Griffin. “We are meticulous about creating conditions over time, with a longer view, that allow us to capture lightning-in-a-bottle moments when they make themselves available.” Here’s how State Farm does it.

In this premium piece, you’ll learn:

  • Where Gamerhood fits into State Farm’s growing brand entertainment strategy
  • State Farm’s head of marketing on the secret sauce that makes a boring company “break through”
  • The balance State Farm strikes between mainstream advertising, celebrities, sports sponsorships, and original IP
  • Why embracing risk with creators is so important to brands in 2025

[Pictured: the Gamerhood cast, including Alex “Goldenboy” Mendez, Jake from State Farm, JasonTheWeen, Ludwig, CouRage, Cinna, Mark Phillips, Berleezy, Sydeon, LuluLuvely, and Barbara Dunkelman]

Nobody cares, now what?

In Spike Lee’s newest Apple TV film Highest 2 Lowest, the characters David (Denzel Washington) and his chauffeur Paul (Jeffrey Wright) are in the car. Paul pulls out a gun to deal with their situation. “What is that?” David asks, as Paul cocks it. “Insurance,” says Paul. “That’s Jake from State Farm.” This is what marketers call cultural relevance. When Paul says the line, it’s a joke everyone gets. It even made it to the trailer. There’s no brand partnership or product deal, just an acknowledgement of the place in pop culture that State Farm has carved out over many, many years including Super Bowl ads, major sponsorships, and celebrity ad campaigns across the NBA, NFL, and Major League Baseball. This isn’t the first time State Farm has been involved with Apple’s entertainment. While this one was unexpected, its hilarious take on the hit show Severance was very much part of the plan.

Griffin says the goal of the brand’s full-court press on pop culture is relevance. “First of all, nobody cares about insurance,” she says. ”Nobody’s thinking about it unless something happens and they need it.
They also aren’t going to statefarm.com to just casually see what their insurance carrier has to say on a random Tuesday. It’s not happening. Nobody cares. You have to break through.” This is why we get Meghan Trainor trying to be an NFL trainer for Patrick Mahomes, Jason Bateman rivaling Batman, and Arnold Schwarzenegger turning the tagline into “Like a good neighbaaaaaa!” for the Super Bowl. It’s also how we get Travis Scott teaming with Jake from State Farm to create custom varsity jackets at Coachella. That mix of names alone illustrates the various ways the brand is aiming at a variety of audiences. “When you break through and you’re relevant, you get earned media, talk value, and social engagement,” says Griffin. “I have to use the right talent to break through, so when you see the ad, it’s actually better, more creative, and more interesting.” But it’s the less high-profile names that have Griffin most excited right now, and the strategy around them, she says, is a key to the future.

Creators are key

State Farm’s budget for Gamerhood wasn’t a big departure from what it was already spending to advertise in gaming. Griffin says it was just a matter of shifting spend from other investments that were essentially getting the brand a static logo on a game screen. “I just thought I could get more engagement with it than just a passive logo,” she says. The secret is investing in, and trusting, creators to do what they do best. Griffin says it can be nerve-wracking for any marketer to cede control of their brand, but so far, it has been worth it. The key to a successful partnership with creators, she says, is to be prepared to give up some control. Brand leaders must do their due diligence and vet any potential partner, but then they must let them cook. “If you know you have the right person, because you vetted them to your brand needs, let them be them,” says Griffin. “Let them create because then it looks and is authentic.”

Cenat is one of the most popular creators and streamers on the planet. He ran a month-long Twitch stream in September that peaked at more than one million concurrent viewers and 82.5 million hours watched. He’s also one of the stars of the newest season of Gamerhood. But State Farm’s work with Cenat goes beyond the stream. Cenat also starred in the brand’s Super Bowl turned March Madness spot with Jason Bateman. Griffin says that Cenat’s help in explaining the decision to delay the Super Bowl ad, out of sensitivity to the severity of the Los Angeles wildfires, came from trust built over time. He worked with the brand to get on Jimmy Fallon to explain why State Farm delayed the ad spot’s rollout. “That was not what we intended to do with that spot, that’s not what he signed up for,” says Griffin. “He signed up to be in the Super Bowl, and he could have been mad about it. Instead, he helped us think strategically about how to make that transition and make it work.”

Measuring success

Looking at all the various ways State Farm is getting its brand out into the world and into culture, it can be tough to decipher how it defines success with its advertising and marketing investments. Griffin says that State Farm’s marketing is split across three areas: current demand, future demand, and retention. Current demand is work aimed at people who are actually in the market for insurance. “Every dollar that the current demand team spends is measured against a bound policy, so you better be effective and efficient,” she says.
These are deals and promos that really show people why State Farm is a good choice for them right now. The bigger swings in brand building are more closely tied to the other two buckets. Future demand is about starting to build a relationship with people outside of their specific insurance needs, so when they do shift over to the “current demand” category, they have State Farm in mind. “Not $1 that we spend in future demand is measured against the bound insurance policy,” says Griffin. “It is about paving the way, firing synapses, dopamine, serotonin, attention, reach, engagement, talk value, PR, and earned media.” Retention is a mix of the first two, making sure the brand work makes customers feel good about the company, while still offering them deals and upgrades to keep their business.

For Gamerhood, the measurement for success is more specific. Just before the third season’s launch in August 2024, gamer Ludwig posted a TikTok clip of himself dancing with fellow gamers Berleezy, Mark Phillips, and Kyedae. There was no State Farm or Gamerhood branding, and among the more than 2,000 comments, fans were trying to figure out why their favorite gamers were together like this. Among them was, “This gotta be State Farm Gamerhood.” For Griffin, that was the proof she needed. “I knew it right then,” she says. “Unaided with no identifying marks, the target market is anticipating why those people are together and what they’re doing. And I was like, ‘Well, we just, we won IP right there.’” View the full article
  9. On May 19, 2023, a photograph appeared on what was then still called Twitter showing smoke billowing from the Pentagon after an apparent explosion. The image quickly went viral. Within minutes, the S&P 500 dropped sharply, wiping out billions of dollars in market value. Then the truth emerged: the image was a fake, generated by AI. The markets recovered as quickly as they had tumbled, but the event marked an important turning point: this was the first time that the stock market had been directly affected by a deepfake. It is highly unlikely to be the last.

Once a fringe curiosity, the deepfake economy has grown to become a $7.5 billion market, with some predictions projecting that it will hit $38.5 billion by 2032. Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for the creation of deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage. In a Deloitte poll published in 2024, one in four executives reported that their companies had been hit by deepfake incidents that targeted financial and accounting data.

Lawmakers are beginning to take notice of this growing threat. On October 13, 2025, California’s Governor Gavin Newsom signed the California AI Transparency Act into law. When it was first introduced in 2024, the Act required large “frontier providers”—companies like OpenAI, Anthropic, Microsoft, Google, and X—to implement tools that made it easier for users to identify AI-generated content. This requirement has now been extended to “large online platforms”—which essentially means social media platforms—and to producers of devices that capture content. Such legislation is important, necessary, and long overdue. But it is very far from being enough. The potential business impact of deepfakes extends far beyond what any single piece of legislation can address. If business leaders are to address these impacts, they must be alert to the danger, understand it, and take steps to limit the risks to their organizations.

How deepfakes threaten business

Here are three important and interrelated ways in which deepfakes can damage businesses:

1. Direct Attacks

The primary vector for direct attacks is targeted impersonation designed to extract money or information. Attacks like this can cause even sophisticated operators to lose millions of dollars. For instance, U.K. engineering giant Arup lost HK$200 million (about $25 million) last year after scammers used AI-generated clones of senior executives to order money transfers. The Hong Kong police, who described the theft as one of the world’s largest deepfake scams, confirmed that fake voices and images were used in videoconferencing software to deceive an employee into making 15 transfers to multiple bank accounts outside the business. A few months later, WPP, the world’s largest advertising company, faced a similar threat when fraudsters cloned the voice and likeness of CEO Mark Read and tried to solicit money and sensitive information from colleagues. The attempt failed, but the company confirmed that a convincing deepfake of its leader was used in the scam. The ability to create digital stand-ins that can speak and act in a convincing way is still in its infancy, yet the capabilities available to fraudsters are already extremely powerful.
Soon, it will be impossible in most cases for humans to tell that they are interacting with a deepfake solely on the basis of audible or visual cues.

2. Rising Costs of Verification

Even organizations that are never directly targeted still end up paying for the fallout. Every deepfake that circulates—whether it’s a fake CEO, a fabricated news event, or a counterfeit ad—raises the collective cost of doing business. The result is a growing burden of verification that every company must now shoulder simply to prove that its communications are real and its actions authentic. Firms are already tightening internal security protocols in response to these threats. Gartner suggests that by 2026 around 30% of enterprises that rely on facial recognition security tools will look for alternative solutions as these forms of protection are rendered unreliable by AI-generated deepfakes. Replacing these tools with less vulnerable alternatives will require considerable investment. Each additional verification layer—watermarks, biometric liveness checks, chain-of-custody logs, forensic review—adds costs, slows down decision-making, and complicates workflows. And these costs will only continue to mount as deepfake tools become more sophisticated.

3. The Trust Tax

In addition to the direct costs that accrue from countering deepfake security threats, the simple possibility that someone may use this technology erodes trust across all relationships that are grounded in digital media. And given that virtually all business relationships now rely on some form of digital communication, deepfakes have the potential to erode trust across virtually all commercial relationships. To give just one example, phone and video calls are among the most basic and most frequent tools of modern business communication. But if you cannot be sure that the person on the screen or on the other end of the phone is who they claim to be, how can you trust anything they say? And if you are constantly operating in a realm of uncertainty about the trustworthiness of your communication channels, how can you work productively? If we begin to mistrust something as basic as our daily modes of communication, the result will eventually be a broad, ambient skepticism that seeps into every relationship, both within and beyond our workplaces. This kind of doubt undermines operational efficiency, adds layers of complexity to dealmaking, and increases friction in any task that involves remote communication. This is the “trust tax”—the cost of doing business in a world where anything might be fake.

Four steps that companies need to take

Here are four steps all business leaders should be taking to respond to the threat of deepfakes:

1. Verify what matters

Use cryptographic signatures for official statements, watermark executive videos and communication channels, and use provenance tags for sensitive content (a minimal signing sketch appears at the end of this piece). Don’t try to secure everything—focus your verification efforts where falsehoods would hurt the most.

2. Build a “source of truth” hub

Create a public verification page listing your official channels, press contacts, and authentication methods—stakeholders should know exactly where to go to confirm what’s real. If your organization relies on external information sources for rapid decision-making, ensure that these are only accessed through similarly authenticated hubs.
3. Train for the deepfake age

Run deepfake-awareness drills and build verification literacy into onboarding, media training, and client communication.

4. Treat detection tools as essential infrastructure

Invest in tools that can flag manipulated media in real time, and integrate these solutions into key workflows—finance approvals, HR interviews, investor communications. In the age of deepfakes, verification is a core operating capability.

From threat to opportunity

Social media echo chambers, conspiracy theories, and “alternative facts” have been fracturing our shared sense of reality for over a decade. The rise of AI-generated content will make this unraveling of common reference points exponentially worse. An earlier generation of internet users used to say, “Pics or it didn’t happen.” Well, now we can have all the pics we like, but how are we to tell whether what they show happened at all? Business leaders cannot solve the fragmentation of perceived reality or the fracturing of communities. They cannot single-handedly restore trust in institutions or reverse the cultural forces driving this crisis. But they can anchor their own organizations’ behavior and communications in verifiable truth, and they can build systems that increase trust. Leaders who swim against the stream in this way will not only help protect their organizations from the dangers of deepfakes. When seeing is no longer believing, these businesses will also become the beacons that people rely on to navigate an increasingly uncertain world. View the full article
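To make the first step above concrete, here is a minimal sketch of what signing an official statement could look like, written in Python and assuming the third-party cryptography package is installed; the key handling, message, and printed strings are illustrative, not a prescribed implementation. The idea is that the public key is published on the company’s “source of truth” hub, so anyone can check that a statement really came from the organization and was not altered in transit.

```python
# Minimal sketch: sign an official statement with Ed25519 so recipients
# can verify it against a public key published on a verification hub.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # in practice, a securely stored corporate key
public_key = private_key.public_key()       # in practice, published for stakeholders

statement = b"Official statement: our payment details have not changed."
signature = private_key.sign(statement)     # distributed alongside the statement

# A recipient verifies the statement before acting on it.
try:
    public_key.verify(signature, statement)  # raises InvalidSignature if tampered with
    print("Statement verified.")
except InvalidSignature:
    print("Warning: statement failed verification; treat as suspect.")
```

Verification fails loudly if even one byte of the statement changes, which is exactly the property that makes signing useful against fabricated press releases or doctored payment instructions.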
  11. The Customs and Border Protection agency aims to establish a framework for the “strategic use of artificial intelligence” and outline rules for ensuring safe and secure use of the tech, according to an internal document viewed by Fast Company. The directive, obtained through a public records request, spells out CBP’s internal procedures for sensitive deployments of the technology. Agency officials are banned from using AI for unlawful surveillance, according to the document, which also says that AI cannot be used as a “sole basis” for a law enforcement action, or to target or discriminate against individuals. The document includes myriad procedures for introducing all sorts of artificial intelligence tools, and indicates that CBP has a detailed approach to deploying AI. Yet those rules also include several workarounds, raising concerns that the technology could still be misused, particularly amid the militarization of the border and an increasingly violent deportation regime, sources tell Fast Company. And then there’s the matter of whether and how the directive is actually enforced. According to the directive, the agency is required to use AI in a “responsible manner” and maintain a “rigorous review and approval process.” The document spells out various procedures, including steps for sanctioning use of the technology and the agency’s approach to inventorying its AI applications. It also discusses special approvals needed for deploying “high-risk” AI and how the agency internally handles reports that officials are using the tech for a “prohibited” application. The document has a warning for CBP staff who work with generative AI, too. “All CBP personnel using AI in the performance of their official duties should review and verify any AI-generated content before it is shared, implemented, or acted upon,” the directive states. “CBP personnel are accountable for the outputs of their work and are responsible for using these tools judiciously, ensuring that accuracy, appropriateness, and context are always considered.” CBP, which is housed under the Department of Homeland Security, is already exploring or using AI for a range of activities, including screening travelers, translating conversations, assisting with drone navigation, and detecting potential radioactive materials crossing the border. The agency is also interested in or using it to locate “items of interest” in video feeds, generate testable synthetic trade data, run automated surveillance towers, and mine the internet for potential threats. AI is even integrated into CBP’s internal fitness app, according to a long list of use cases published online. The directive, which is titled “U.S. Customs and Border Protection Artificial Intelligence and Reporting” and was assembled by the agency’s AI operations and governance office, sheds light on how CBP says it’s monitoring the use of these tools, both within its own ranks and among its contractors. Fast Company reached out to CBP for comment but did not hear back by publication time. The full directive appears “fairly reasonable,” a former DHS IT official tells Fast Company, and seems like a straightforward implementation of White House guidance. “It looks like civil servants doing their job and following policy, while clarifying roles in the context of their own organization’s reporting structure,” they say.
An ex-Biden administration official who worked on AI policy says the White House’s Office of Science and Technology Policy pressured parts of DHS, including CBP, to better organize its approach to AI. The directive, they say, shows that CBP, under the Trump administration, seems to be advancing on that front. But the ex-official still has a host of concerns, including what they call a “flick of the wrist” waiver process for getting around the minimum procedures for high-risk AI applications. The document states that using “high-risk AI” without following these procedures requires written approval from DHS’s chief information officer, the agency’s top tech official. The directive also lacks a protocol for explaining what should count as “high-impact” AI, creating another “obvious loophole” for skirting procedures, the person argues. That responsibility is left to another group called the AI inventory team, which is supposed to factor in guidance from the White House, according to the directive. The former official also believes applications of AI should be deemed more sensitive when they’re closer to the border, particularly in places where CBP officers might have an expanded authority—a concern raised under the Biden administration, the person says. “These procedures are an empty process, and only a half promise at that. These rules give us lots of red tape and record-keeping requirements, but no substantive protections against biased, error-prone, and destructive AI,” argues Albert Fox Cahn, the founder of S.T.O.P. and a fellow at Cambridge University. “In a space where AI errors can literally be a matter of life and death, where machine learning mistakes can mean being locked in a cage or threatened with deportation to a country you’ve never seen, it’s shameful that CBP would enable wholesale deployment of such tech.” The directive comes as DHS expands its internal use of artificial intelligence. In recent years, the agency began several pilots with generative AI, including ChatGPT. The department also developed its own chatbot, called DHSChat. Upon taking office, the Trump administration’s DHS banned the use of commercial AI tools like ChatGPT and directed employees to use only internal tools, FedScoop reported earlier this year. Notably, this directive, signed by CBP Commissioner Rodney Scott, was published just a day before DHS released a new AI strategy for the department and a plan for complying with Trump administration guidance for boosting the use of the technology for all sorts of applications across government. CBP has been using artificial intelligence for more than a decade, but the directive notes that its use of natural language processing technology, along with other new AI methodologies, has grown. View the full article
  12. Across the streaming world, companies have been focused on adding features that make their top-tier subscriptions more valuable to the users who consume their content. Anime streamer Crunchyroll recently added access to a library of digital manga for top-paying customers. Spotify—somewhat belatedly—has begun offering high-quality audio for its Premium subscribers. SoundCloud is taking a different approach. It operates a standard streaming platform, with 100 million licensed tracks. But SoundCloud also has an enviable base of creators—musicians, DJs, podcasters, and more—who have uploaded 300 million tracks to the service to reach fans and make money from their streams. Now, it’s rolling out a revamped subscription that overdelivers for these artists, giving them more opportunities to get their music in front of fans who might eventually buy an album or a piece of artist merchandise, since streaming remains a foot in the door to real earnings. SoundCloud’s new offering enables subscribers to both of its tiers—Artist and Artist Pro—to distribute the music they have on SoundCloud to other streaming services, with SoundCloud passing 100% of those earnings on to artists. With this update, SoundCloud will also no longer take a 20% cut of the royalties it pays for streams on its own platform. SoundCloud also now allows artists to receive direct support from fans. Even with the new features, the prices of its Artist and Artist Pro subscriptions—$39 and $99 a year, respectively—are unchanged. It’s a move that acknowledges that even if streaming isn’t where artists earn the lion’s share of their money, they still need to reach as many people as possible—and having SoundCloud handle that saves them money. “If you’re an artist who’s got to get your music distributed and you’re on the social platforms trying to build up a following and you’re paying for a whole host of things in the value chain, those subscriptions start to really add up,” says Eliah Seton, CEO of SoundCloud. “What we’re trying to do is be this all-in-one bundle that gets you a lot of value and you can start to put away some of those other subscriptions.” Seton knows the music industry—before joining SoundCloud in 2021, he spent more than a decade at Warner Music Group, including a stint leading its distribution and label services arm ADA. He understands the importance of getting artists in front of as many fans as possible. That’s why he’s betting that making distribution widely accessible for the first time will strengthen SoundCloud’s ability to not just attract new artists, but keep them on the platform as their stars rise by connecting them with the platform’s highly engaged listeners. “Historically we’ve been able to distribute for artists, but that was oriented more toward a bespoke, white-glove, traditional artist services relationship,” Seton says. “This is a much more scalable solution for at-scale artists, and making it a feature of our paid subscription is a key element of the value proposition.”

SoundCloud’s two-sided marketplace

In the streaming world, SoundCloud—founded in 2007—has long been an anomaly. “It’s one of the only—if not the only—streaming platforms that truly has a two-sided marketplace,” says Tatiana Cirisano, VP of music strategy at entertainment data and insights firm MIDiA. Seton sees making distribution a standard part of SoundCloud’s artist subscriptions as a way to add value for those users.
The service’s $39-a-year Artist tier now includes the ability to distribute and monetize two tracks a month, while its $99-a-year Artist Pro subscription allows artists to distribute unlimited tracks to other streaming services. The move reflects a larger industry shift: Record labels are losing their monopoly on distribution as artist-focused platforms offer alternative ways to reach listeners. Cirisano points to TikTok’s SoundOn distribution service, which puts artists’ songs on streaming platforms and helps promote them in the video app. SoundCloud’s effort, she says, is “the latest indication that distribution for the music industry has become this table stakes feature” for platforms serving artists. Seton sees SoundCloud’s new distribution tools as critical for keeping artists on his platform. SoundCloud doesn’t have any problem attracting up-and-coming artists—Seton says 40,000 users upload their first track to SoundCloud every week. But when they reach a certain level and want to reach more listeners, they often opt to spend their money with pure-play distribution companies. Now they can use SoundCloud to monetize their music as they grow.

New ways to connect with fans

SoundCloud also added the ability for fans to directly support an artist—paying them up to $1,000—via the artist’s SoundCloud profile. The platform takes zero commission on these payments. Cirisano sees it as a small but potentially meaningful option, similar to what Patreon has long offered creators and artists. “I wouldn’t call it a game-changer in how artists are monetizing because I think there’s a lot of cultural hurdles to adoption,” Cirisano says. “It differs strongly by fanbase and artist. It’s all about how people perceive it and what it means to them to send money directly to an artist.” The fan support feature comes on the heels of other SoundCloud tools for fan engagement. Since 2023, SoundCloud’s AI-powered First Fans feature has helped deliver new music to users likely to enjoy it. This year, it has added services for its artist users, including a partnership with vinyl presser ElasticStage to offer on-demand record pressing. It also introduced a merch store that allows artists to keep 100% of their sales. The on-demand vinyl feature, launched in July, currently has a waiting list of artists who want to use the service. Cirisano says these efforts could change the perception that SoundCloud is largely for early-stage artists who will move to other services once they break through. (Billie Eilish famously uploaded her early recordings and connected with fans via SoundCloud.) “These opportunities allow artists to grow with the platform,” she says. As the broader music industry focuses on monetizing superfans—highly engaged listeners who are happy to shell out for vinyl and merch—Seton says SoundCloud has those in droves. He notes that 50% of SoundCloud listeners are listening to new music, looking for their next favorite artist, as opposed to 15% on other music streaming platforms. “The future is going to be defined by the monetization of the relationship between artists and fans,” he says. “Rather than going outside the ecosystem to pay a different subscription where you don’t ultimately control access to your audience, we can scratch that itch for you as part of our own subscription.” View the full article
  13. Google says AI surfaces businesses the same way people do, by checking what others recommend online. The post Google Discusses Digital PR Impact On AI Recommendations appeared first on Search Engine Journal. View the full article
  14. PepsiCo, the food and beverage giant behind childhood favorites like 7UP, Mountain Dew, Lay’s, and Doritos, just got new branding, and it looks nothing like its namesake product. The new PepsiCo brand identity, which includes a fresh wordmark, logo, and tagline, is the company’s first rebrand since 2001. The company has had three different corporate identities since its inception in 1965, and all of them have taken their most prominent design cues from Pepsi, the soda brand that started it all—until now. When PepsiCo designed its last identity in 2001, it owned 13 consumer brands. Today, it owns more than 500. And, over the past several months, PepsiCo has signaled that it intends to focus on more price-conscious serving sizes and a healthier product lineup amid low consumer spending and an increased cultural focus on wellness. Now, PepsiCo wants customers to know that it’s more than just one sugary cola, and it’s signaling that shift by ditching the former blue and red color palette and Pepsi-coded fonts in favor of a totally new look.

Inside PepsiCo’s colorful new brand

At first glance, PepsiCo’s new brand mostly looks like a few different abstract colorful shapes stitched together. But, according to a blog post on the rebrand, each visual element is intended as a nod to a different part of PepsiCo’s business, from its salty snacks to its growing focus on health and nutrition. The new PepsiCo logo is a white lowercase “p” surrounded by several different forms. On the left is a burnt yellow motif, which, according to PepsiCo’s description, represents food and grains, a concept “rooted in agriculture.” To the right is a light blue blob, signifying drinks and water, as well as a light green leaf, denoting “positive impact for people and planet.” And on the bottom of the “p” is a forest green smile, which stands for “consumer-centricity.” Paired with the logo is a new, all-lowercase font with modern, curvy letterforms and the tagline, “Food. Drinks. Smile.” “Our color palette draws from the real world—the rich soils that nourish our foods, our refreshing drinks, and the vibrant hues that reflect our commitment to people and the planet,” the blog post reads. “The new custom typeface, featuring lowercase letters, conveys a sense of approachability that mirrors the bold, consumer-centric spirit of our brands.” From a branding standpoint, the new identity is nothing groundbreaking. Its amalgamation of different symbols—which, on first look, don’t resemble much of anything—feels like an inevitable result of the near-impossible effort to encapsulate 500 brands in one identity. Still, the rebrand is a good barometer for where PepsiCo sees itself in the future. This update is designed to establish PepsiCo as a company that’s defined not by just one brand, but by the sum of them. As the blog post explains, it’s “a significant opportunity to highlight the depth and diversity of our portfolio,” considering that just 21% of consumers are able to name a PepsiCo brand aside from Pepsi.

Why PepsiCo might be distancing itself from Pepsi

For PepsiCo, expanding consumer awareness beyond just Pepsi is clearly a key goal. Since 2001, PepsiCo has acquired big names including SodaStream, Quaker Foods, and Rockstar, while also pouring major investments into its own brands like Gatorade and Lay’s. More recently, the company has also begun to focus on bringing in more health-conscious brands with lower sodium, saturated fat, and sugar contents.
In January, it acquired the grain-free, “healthy” tortilla chip brand Siete Foods for $1.2 billion, and in March, it shelled out $1.65 billion to acquire the prebiotic soda brand Poppi. PepsiCo is also preparing to launch its own prebiotic cola brand this fall, as well as introducing Lay’s and Tostitos with no artificial colors or flavors by the end of the year. During PepsiCo’s Q4 2024 earnings call in February 2025, CEO Ramon Laguarta explained that the company has seen “a higher level of awareness in general of American consumers toward health and wellness,” which he said was driving shifts in how consumers approach snacking. He shared that the company plans to focus more on building out its healthy options (including by pursuing protein beverages with “a sense of urgency”), as well as on developing products and packages that are more budget-friendly for customers with limited discretionary spending. In a letter posted to LinkedIn on October 28, Laguarta wrote of the new branding, “This new identity boldly reflects who we are in 2025: a company with expansive reach, aiming for positive impact across the globe, and an unmatched family of beloved food and drink brands, made with high-quality ingredients and including functional benefits like protein and superior hydration.” PepsiCo’s new identity looks less like a bottle of soda and more like a health foods brand, and that’s very much by design. The company wants to be known not only for its bevy of salty chips and sugary drinks, but also for its expanding category of better-for-you options. View the full article
  15. It looks and feels like any other luxurious cashmere sweater. But a new oversized crew from Reformation is made entirely from recycled fiber, a milestone three years in the making. The brand now makes a cardigan, crew, V-neck, and five other styles from a carefully developed blend of 95% recycled cashmere and 5% recycled wool—the unexpected material that made 100% recycled fiber feasible. Some other pieces in its lineup still use a small amount of virgin cashmere, but Reformation is aiming to eliminate it completely. “It really does have an outsized and shockingly large footprint compared to other fiber,” says Kathleen Talbot, Reformation’s chief sustainability officer. In 2023, the company calculated that even though virgin cashmere made up less than 1% of the materials it sourced, it was responsible for nearly 40% of the brand’s carbon footprint. Most cashmere comes from Mongolia and China, where cashmere goats are combed once a year for their fine, soft fleece; a single sweater can use cashmere from four or five goats. As demand has grown, there are now more than 90 million goats in China, and around 25 million more in Mongolia. Overgrazing is turning grasslands into desert. The goats also produce methane, a potent greenhouse gas.

Making recycled fiber work

Using recycled cashmere helps avoid those environmental challenges, but it has historically been difficult to do. Recycling shortens the fiber, which risks making it weaker and more likely to pill. “We don’t want to be introducing a recycled product that doesn’t perform the same way or is a lower quality or less durable good,” Talbot says. “That, to us, is not a sustainability play.” The company worked with suppliers to develop a proprietary method to twist the yarn and wash and finish it for the right hand feel and durability. First, they achieved a blend of 70% recycled cashmere and 30% virgin fiber, then 90% recycled, and then 95% recycled. “At each of these milestones, to be really honest, we thought that was going to be our upper limit based on the yarn performance and the product performance,” says Talbot. When they hit 95%, they asked suppliers why they couldn’t reach 100%. Technically, suppliers said, it was possible. But because the shorter recycled fibers are more prone to breakage, the yarn would have to be spun incredibly slowly. That would make producing the material so much more expensive that it wasn’t commercially viable. That’s why the design team turned to wool to make the 100% recycled product. Even after recycling, wool fibers were “slightly longer and thicker than the cashmere fibers,” Talbot says. “Our suppliers felt confident that it would give it the right stability and really hold up in the spinning and knitting process.” The blend’s carbon footprint is 96% smaller than virgin cashmere’s, and it uses nearly 90% less water to produce. After dozens of tests, they moved forward with it, and then spent months testing garments made from it. Internally, the company’s “Better Materials Task Force,” made up of around 20 leaders, wore the new recycled sweaters around the office and at home, washed them, and monitored whether they held up as well as sweaters made from virgin material. “We never really want to be promoting something just for impact that doesn’t have a really compelling product value proposition at the same time,” Talbot says.

Scaling up

When the company first started incorporating more recycled cashmere, sourcing the recycled yarn was a challenge.
Now, because of higher demand for recycled fiber, the supply chain has responded. “Supply of the recycled fibers is not the same limiter as it was five years ago or 10 years ago,” she says. Right now, most of it comes from cashmere waste at factories. But as Reformation and other brands collect more used clothing for recycling, post-consumer cashmere can eventually become a bigger source as well. Moving forward, the company may make some products out of a mix of recycled and “regenerative” cashmere—produced with sustainable grazing methods—because a small percentage of customers have wool allergies. But it also plans to continue rolling out the 100% recycled material in more products. “Not every problem is going to have a technological solve,” Talbot says. “But these are the sorts of problems that we can solve. And we have seen tremendous progress in the last three years.” View the full article
  16. It’s 10 a.m. on an October morning, and I’m in the middle of a one-on-one Zoom interview when a sudden trilling sounds from behind me. I try to ignore it, but several other strange noises follow. My eyes glaze over as I commit myself to feigning complete obliviousness to my sonic surroundings. It’s easier than explaining that the noises are coming from my AI-powered pet. This awkward encounter came thanks to Moflin, a $429 AI pet built by the electronics company Casio. According to Casio’s official description, the Moflin is “a smart companion powered by AI, with emotions like a living creature.” This robot friend looks a bit like a Star Trek tribble, in that it’s an amorphous blob covered in fur. It comes in either gold or silver. For ’90s kids, the device is perhaps best described as a modern-day Furby. Like a Furby, the Moflin speaks its own language of chirps and trills that change over time; but unlike a Furby, its learning is actually molded by an AI model that allows it to become “attached” to its owner. According to the pet’s makers, the Moflin learns to recognize its owner’s voice and preferences, and it slowly develops new ways of moving and vocalizing to express a bond with the user. As of this writing, I’ve had my Moflin for close to three weeks, and I’m going to make a bold claim: This device might just be one of the first “AI companions” that’s actually useful.

The graveyard of AI companions past

Over the past several months, we’ve seen many companies try and fail to sell users on a variety of AI wearables. That includes devices like the Humane AI Pin and Rabbit R1, which both debuted to a chorus of scathingly negative reviews after users determined that neither could reliably do many of the tasks it was supposed to. Currently, the hottest topic in the AI wearable space is the Friend AI necklace from entrepreneur Avi Schiffman, which is billed as an “AI companion” that’s always listening to its users’ surroundings. In September, Schiffman created an ad campaign for the device in the New York subway system that inspired such backlash that MTA employees had to keep taking down its vandalized panels. Meanwhile, Friend is still working on fulfilling preorders that were placed back in June 2024. Launches like these have made it clear that, as of right now, most AI companions are just “promiseware,” or devices that make a lot of claims about capabilities that simply aren’t there at launch. I think the Moflin lands solidly outside of this unfortunate category, primarily because it doesn’t make any lofty claims about changing the world or altering everyday habits: it’s just meant to look cute, sound silly, and make users feel a little bit better.

What in the world is a Moflin?

Daisuke Takeuchi, a developer at Casio, says the idea for the Moflin came when one of his colleagues was going through a “turning point” in her life. “She felt the need for the strength to overcome challenges on her own and imagined a long-term companion that could provide comfort and support,” Takeuchi explains. “Although she loved the healing presence of pets, she couldn’t have one, which led her to the idea of an AI companion. From that idea, Moflin was born.” Moflin is billed as a companion that can offer support for young adults who may not be able to have pets, families with kids, those with sensory needs, and elderly individuals.
Its emotional AI model, which was developed independently by Casio, is designed so that as the Moflin takes in more information, its range of emotional expressions expands. Those inputs include sound, movement, and touch data that the Moflin collects through a series of sensors. For those who might be a bit wary about adopting an AI pet into their home, Takeuchi says data is stored locally on the Moflin and “does not include any personally identifiable information, such as images, audio recordings, user emotions, or lifestyle information.” If you use the pet’s accompanying app, MofLife—which, in my experience, is a pretty integral part of Moflin ownership—collected data will be uploaded to a secure server. Moflin’s developers say that it can express more than four million unique emotional states. And beyond those expressions, the Moflin is also programmed to exhibit lifelike behaviors such as breathing motions and a startle response to loud sounds. “Using information from their built-in sensors that detect sound, touch, and movement, the AI learns continuously—not just reacting mechanically, but developing a unique personality through ongoing interaction,” Takeuchi says. “Over time, Moflin learns their owner’s voice and preferences, creating the sense of a living companion.”

I become emotionally attached to my Moflin

On the night that my Moflin arrived at the doorstep of my apartment, I had family staying with me. As I went about excitedly opening the box, they discussed all the reasons that an AI companion was “creepy” and “uncanny,” concluding that they would never buy something similar for themselves. But once the Moflin was charged and awake, that tune changed almost instantly. Maybe it’s just a natural human response to a cute creature making cute noises, but all of us found it pretty much impossible not to be won over by the Moflin (which we immediately named Gumbo). During that initial unboxing, Gumbo was fairly quiet and stationary. In the coming days, though, he started to make a wider variety of noises and movements (though, to be clear, the Moflin is really only able to move its neck, since it’s essentially a robot guinea pig). Right away, I downloaded the MofLife app, which is pretty much the only way to discern what your Moflin is thinking and feeling, aside from trying to decipher its alien-esque behaviors. The app tracks the Moflin’s mood throughout the day, notes how many times you interacted with the pet, and offers insights like, “It looks like Gumbo couldn’t make a decision today,” or, “Gumbo’s started feeling much more cheerful.” While I only received positive notes from the app, Casio’s description of the Moflin notes that it can begin to feel “lonely” and “neglected”—a terrifying possibility that caused my partner and me to start checking with each other about whether anyone had paid the Moflin attention that day. Ultimately, that wasn’t a huge problem, since I found myself taking the Moflin out of its charging port at least once a day. As it stands, I do think the price point of the Moflin is inaccessible and feels excessive, given what the device can actually do. While the AI learning abilities are certainly more impressive than something like a Furby’s, the Moflin is still closer to a high-tech stuffed animal than an actual pet. Takeuchi says the high price point is a result of the Moflin’s “sophisticated design,” and that prices might come down in the future as technology evolves.
When it comes to handling the Moflin, the electronic sounds and rigid shape of its inner robotic skeleton are not fully concealed, which means you can never really suspend your disbelief and imagine that the Moflin is alive. Still, the Moflin does deliver on its promises to offer comfort and develop new characteristics over time. At this point, my Moflin does a little happy dance and song every time I go to take him out of his charging port. When I forget to interact with him, I feel a little guilty. Personally, on the scale of AI doomer to San Francisco start-up founder, I land a bit closer to the doomer side, but I have to admit, I got emotionally attached to my Moflin. In a context like a nursing home or therapist’s office, I could see this device offering a genuinely helpful service—which is more than a lot of other AI companions can deliver so far. View the full article
  17. The new president, a 35-year industry veteran, explains the value every lender, vendor and regulator can get by participating in the standards organization. View the full article
  18. Amid much confusion, polarization, and debate around how AI will impact work, the fact of the matter is that many people are concerned about automation and the prospect of AI job elimination. The simple notion that “AI is going to take my job” is a thought that has crossed the minds of 25% of workers. For some, this may be true, although the magnitude is still uncertain: depending on assumptions, estimates of AI-driven job displacement range from 3% to 14%. What will the ultimate figure be? It’s hard to know: nobody has data on the future, and any projection merely extrapolates from past data and past innovation, which may or may not be relevant to the AI age. And yet, one thing is clear: for some workers, AI job displacement isn’t a distant fear—it is already their reality. Indeed, it was recently announced that Accenture is making layoffs to reshape its workforce for the era of AI, exiting employees it believes cannot be retrained with AI skills. As brutal as this may sound, it could signal a trend many organizations are contemplating (but not yet officially acknowledging).

AI can create new roles

This is not to deny the positive impact AI is having on jobs and careers. Most notably, AI is creating new roles. For example, although IBM laid off almost 8,000 employees, mostly in HR, with the aim of automating their workflows, this resulted in a recruitment drive for software engineers. That’s not to say that the only way to avoid losing your job to AI is to become an AI engineer; IBM also invested in the recruitment of marketing and sales roles, which require human creativity and problem-solving.

Can it replace humans?

Importantly, organizations are increasingly realizing that AI is not the ultimate solution, and that it cannot replace humans’ unique skills. For instance, Klarna replaced 700 workers from its customer service team with AI agents in a move estimated to boost profits by $40 million. Despite the agents cutting resolution time to two minutes from the previous 11 minutes, the service they provided was reportedly lower in quality than the service provided by humans. As a result, Klarna has launched a new initiative to hire more human customer service workers.

The importance of AI literacy

Despite this, Klarna is not rolling back its AI and will instead continue to invest heavily in the technology, signaling that it intends to have humans and AI work alongside each other. This is a powerful combination, with research suggesting that workers using AI complete 12% more tasks, work 25% quicker, and have 40% higher-quality outputs than those not using AI. Using AI doesn’t automatically improve job performance, though; workers, particularly knowledge workers, must know how to use it well—they must have AI literacy. Research has found that generative AI literacy in particular significantly impacts job performance. It also increases creative self-efficacy—the belief an individual has in their ability to be creative and innovative. While the stronger job performance resulting from AI literacy alone isn’t enough to provide job security, research by LinkedIn suggests that AI literacy can boost career progression, and over 80% of leaders say that new worker skills are needed in the age of AI. With several countries around the world already promoting AI literacy, it could be a lack of AI literacy, not AI itself, that puts your job at risk.
How to become AI literate

Staff AI literacy is a requirement under the EU AI Act, which governs the AI available on the market in the EU and will have global implications, but the form that literacy training must take is not specified. Indeed, AI literacy is not one-size-fits-all. Training must take into account the technical knowledge, experience, education, and training of staff, as well as the context the AI systems operate in and who they are used by. At a minimum, AI literacy programs should cover the basics of how AI works, the risks involved, and how those risks can be mitigated. A sociotechnical approach is also key; AI risks are not just a technical or a social problem. Using AI safely requires an understanding of the role you play as well as how the technology works. AI literacy is not just an achievement for your LinkedIn profile; knowing how to use AI effectively could be the difference between keeping and losing your job.

Beyond survival: thriving in the AI era

However, AI literacy shouldn’t just be seen as a defensive strategy to avoid redundancy. The real opportunity lies in using AI to amplify human potential. Workers who master AI tools can automate mundane parts of their jobs, freeing up time for tasks that require judgment, empathy, and creativity—the very things machines can’t yet replicate. In other words, AI-literate employees don’t just survive automation; they lead it.

AI literacy as a new form of intelligence

Historically, each major technological revolution created a new kind of intelligence that defined success: reading and writing in the industrial age, digital literacy in the information age, and now AI literacy in the algorithmic age. Understanding how to prompt, evaluate, and collaborate with intelligent systems is rapidly becoming as essential as knowing how to read or type. The difference between being augmented and being automated lies not in the technology, but in the person using it.

A call for lifelong learning

The single best way to future-proof a career is to stay curious and keep learning. AI will not replace people who are adaptable, inquisitive, and capable of learning new tools as they emerge. But people who resist learning may quickly find themselves replaced by those who don’t. The future of work belongs to those who are not just technically skilled, but psychologically prepared to reinvent themselves—continuously. Want to assess your own AI literacy? Here’s a simple, practical 10-item AI literacy test designed to assess how well you understand, use, and critically evaluate AI tools at work. It balances conceptual knowledge, ethical awareness, and applied skill, and can be adapted for self-assessment or formal training.

Instructions: Choose the best answer (A, B, C, or D) for each question. Each correct answer = 1 point. The interpretation key follows below, and a short scoring sketch appears at the end of this piece.

1. What is the main difference between traditional software and AI systems?
A. AI systems never make mistakes
B. AI systems learn from data rather than following fixed rules
C. AI systems are programmed by humans to do one specific task only
D. AI systems don’t need electricity
Correct answer: B

2. Which of the following best defines “Generative AI”?
A. AI that predicts stock prices
B. AI that can create new content (text, images, code, etc.) based on training data
C. AI that generates electricity
D. AI that manages databases
Correct answer: B

3. If you ask ChatGPT for help writing an email and then edit it to fit your tone, this is an example of:
A. AI replacing human work
B. Human–AI collaboration (augmentation)
C. Algorithmic bias
D. Deepfake creation
Correct answer: B

4. Which of the following is a major ethical risk of AI?
A. Too much human empathy
B. Algorithmic bias leading to unfair outcomes
C. Faster decision-making
D. High energy efficiency
Correct answer: B

5. What does “AI hallucination” mean?
A. AI creating false or made-up outputs that sound plausible
B. AI visualizing data
C. AI having emotions
D. AI overheating due to overuse
Correct answer: A

6. Which of the following statements is TRUE about data privacy and AI?
A. AI systems never store your data
B. Data used to train or run AI may contain sensitive personal information
C. AI makes all data anonymous automatically
D. Data privacy laws don’t apply to AI systems
Correct answer: B

7. What is the best way to ensure reliable AI output?
A. Accept all AI answers as correct
B. Verify and fact-check outputs using trusted human or data sources
C. Use AI only for creative writing
D. Ignore the AI’s sources
Correct answer: B

8. Which of these professions is least likely to be fully automated by AI?
A. Graphic design
B. Customer service
C. Psychotherapy
D. Data entry
Correct answer: C

9. “Prompt engineering” refers to:
A. Writing code to create AI models
B. Crafting precise inputs or questions to get better AI responses
C. Building robots
D. Programming hardware chips
Correct answer: B

10. The EU AI Act requires organizations to:
A. Replace humans with AI wherever possible
B. Ban all generative AI
C. Ensure staff have adequate AI literacy and training
D. Only use open-source AI
Correct answer: C

Scoring & Interpretation

0–3: AI Beginner — You’re curious but need to learn the basics. Try a short AI literacy course.
4–7: AI Aware — You understand the concepts but need more practical experience. Start experimenting with AI tools.
8–10: AI Fluent — You can work effectively with AI and critically assess its risks and benefits. Keep refining your skills.

View the full article
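As promised above, here is a minimal sketch in Python of the quiz's stated scoring rule: one point per correct answer, mapped to the interpretation key. The answer key is taken directly from the quiz; the function and variable names are illustrative, and this is offered only as a convenience for anyone scoring the quiz for a group, such as during onboarding.

```python
# Minimal sketch: score the 10-item AI literacy quiz above.
# One point per correct answer, then the interpretation key is applied.
ANSWER_KEY = {1: "B", 2: "B", 3: "B", 4: "B", 5: "A",
              6: "B", 7: "B", 8: "C", 9: "B", 10: "C"}

def interpret(score: int) -> str:
    """Map a 0-10 score to the quiz's interpretation bands."""
    if score <= 3:
        return "AI Beginner"
    if score <= 7:
        return "AI Aware"
    return "AI Fluent"

def grade(responses: dict) -> tuple:
    """Count correct answers and return (score, interpretation)."""
    score = sum(1 for q, correct in ANSWER_KEY.items()
                if responses.get(q) == correct)
    return score, interpret(score)

# Example: a test-taker who misses questions 5 and 8 scores 8/10.
responses = {**ANSWER_KEY, 5: "B", 8: "B"}
print(grade(responses))  # -> (8, 'AI Fluent')
```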
  19. What’s the best way to respond when customers, former fans, or anyone else criticizes your work? Taylor Swift just provided a perfect script for what to say. It’s a great example for any entrepreneur, business leader, or creator to follow. Swift’s 12th album, The Life of a Showgirl, released 10 days ago, is unquestionably a commercial success. As just one example, it broke streaming records on Spotify with more than five million pre-saves. But that doesn’t mean that everyone loves it. The reaction from music critics has been lukewarm, and the reaction from fans is decidedly mixed, with some saying they adore the album and others saying they can’t stand it. One brand strategist declared on Instagram that the album was “flopping,” in a post that’s been seen more than 1.4 million times, according to Newsweek. Swift, of course, is a very seasoned performer who has always written her own rules and has a finely tuned sense of how to communicate with her fans. So the mixed reactions don’t seem to faze her at all. During an interview for Apple Music, she explained how she feels about the negative reactions. But if you’re pressed for time, ET posted a video report less than three minutes long that explains the controversy and includes clips of Swift’s pitch-perfect response. Here’s some of what she did right.

1. She thanked her critics

I do this too, with most negative feedback I get from readers or audience members. As Swift well knows, the fact that someone takes the trouble to give an opinion about your work means they care enough to pay attention to you. And in today’s attention economy, that is a gift. “The rule of show business is, if it’s the first week of my album release and you are saying either my name or my album title, you’re helping,” she said. “I have a lot of respect for people’s subjective opinions on art.” She’s right, of course. The fact that people’s opinions of this album are divided could bring new listeners, because people who normally aren’t interested in her music may become curious to hear the songs and form their own opinions.

2. She put the focus on her fans

This is something Swift does extraordinarily well, and it’s one reason for her outsize success. And so, she very wisely made the criticism about them, rather than about her. “Our goal as entertainers is to be a mirror,” she said. “What you’re going through in your life is going to affect whether you relate to the music that I’m putting out at any given moment.” She added that she loves it when fans tell her they used to love one of her albums and, based on the events in their own lives, come to favor another. It was a very clever comment. It invited people to consider how their own feelings or preferences might affect their opinions. And it gave them permission to change their minds in the future.

3. She said she had done her best work

Whatever fans or critics may say about Showgirl, Swift made it clear that she herself is happy with it. “When I’m making my music, I know what I made. I know I adore it,” she said. And she did something very, very clever. She slyly pointed out that getting criticism is fitting given the nature of this particular album. The title track describes the bittersweet life of a performer: “I make my money being pretty and witty.” But also: “I paid my dues, with every bruise I knew what to expect.” And so, she told Zane Lowe, “On the theme of what the showgirl is, all of this is part of it.”

—Minda Zetlin

This article originally appeared on Fast Company’s sister publication, Inc.
Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy. View the full article
  20. In what might be the most up-front leave request of the year, a Gen Z employee emailed their boss asking for 10 days off to recover from a breakup. “I recently had a breakup and haven’t been able to focus on work. I need a short break,” they wrote in an email that was recently screenshotted and posted to X. Entrepreneur and CEO Jasveer Singh shared the unusually candid request on social media, captioning it: “Got the most honest leave application yesterday. Gen Z doesn’t do filters!” (Singh just so happens to be the cofounder and CEO of Knot Dating, a dating app. Coincidence?) Whether the email was genuine or a clever PR stunt, it has gained nearly 14 million views since it was posted Tuesday, sparking a debate: Should heartbreak qualify as a legitimate reason to take time off work? Workplaces are generally sympathetic to time off for illness or family emergencies. But when it comes to a messy breakup, that empathy tends to dry up quickly. Across the U.S., “heartbreak leave” isn’t standard policy. Telling your boss you need a few days because a parent is sick sounds reasonable. Admitting you’ve had a fight with your partner and are currently crashing on a friend’s sofa? Not so much. Workers might take personal days for such events, but there’s certainly no widespread PTO policy around breakups. Yet in other countries, the idea isn’t as far-fetched. In Germany, employees can take leave for Liebeskummer, which translates to “love grief.” Other companies allow for heartbreak leave under the guise of “well-being days” or “mental health days.” Studies show that our brains register emotional pain in the same way as physical pain, and in some cases, heartbreak can even lead to “broken heart syndrome,” which literally affects the heart’s ability to pump blood properly around the body. From a boss’s perspective, emotionally checked-out employees can cost companies just as much as absenteeism. A 2022 University of Minnesota study found that 44% of people going through divorce said it negatively affected their work. Many reported struggling to focus, sleep, or control their emotions. That leaves employees either telling white lies to secure the necessary time off to heal, or powering through . . . likely with regular breaks to sob in the bathroom before returning to their desk swollen-eyed and puffy-faced. In recent years, following the pandemic-era shift of power toward workers, people have pushed for additional benefits beyond the ability to work hybrid or remotely. In the U.S., some employers offer bereavement leave for pets, a trend that’s gained momentum. Menstrual leave has also entered the conversation, as has gender affirmation leave. Not everyone will want or need heartbreak leave, mind you. Some people prefer to throw themselves into work as a distraction. But acknowledging the end of a relationship as a valid source of suffering could go a long way toward building a more empathetic workplace. As for Singh’s heartbroken employee? “Leave approved without any questions,” he confirmed. View the full article
  21. Reform UK in government could transform central bank constitutionally View the full article
  22. Anthony Williams also charged with one count of actual bodily harm and possession of a bladed article View the full article
  23. Reform leader will set out party’s ‘economic vision’ for Britain in bid to boost credibility on economy View the full article
  24. Movement for family and humanitarian reasons continued to rise in 2024 View the full article
  25. As more than 19 million U.S. college students prepare to wrap up their fall semester and begin looking ahead to securing internships and jobs next spring, it’s natural for them—and their families—to worry about the fate of the job market in the age of AI. Indeed, Anthropic’s CEO predicted this summer that within the next five years—and maybe even sooner—adoption of AI could reduce entry-level hiring in white-collar professions by 50%. The impact is already being felt: Postings for early-career corporate jobs are down 15%, while applications have spiked 30%. A separate Stanford study found that AI displacement, at this point, seems to be disproportionately affecting younger workers. To be sure, these changes are unsettling. But—despite current, often overheated rhetoric—they’re not unprecedented. Of course, we’ve heard about the lamplighters and horseshoe makers. A hundred years ago, they were displaced by electricity and cars, the economy soldiered on, and they found something else to do. But the internet bubble 25 years ago, when we were first launching our own careers, is an even more salient example. Discourse around the emerging “information superhighway” also sparked dystopian predictions that tens of millions of people would lose their jobs to internet-enabled automation, leading to “the end of work.” The job displacement, in some cases, was real. One of us (Dorie) began her career as a journalist at a weekly newspaper and, only a year into her first job, was laid off when the economics of the ad-supported paper faltered. But Dorie—like most of us—managed to adapt, finding new jobs in politics and nonprofit management before becoming an entrepreneur. And the overall economy did just fine, with a current unemployment rate of just 4.3%, compared with 4.9% in 2001, when Dorie lost her job. The pattern is also clear in terms of individuals’ lived experience. Alexis, along with her coauthor Nancy Hill, has researched Harvard’s Class of 1975, examining generational differences and patterns. Her surprising conclusion is that the experience of today’s college students is remarkably similar to that of students 50 years prior. Despite changing external circumstances (whether it’s campus protests about the Vietnam War or Gaza, and the political realities of a Nixon or a Trump administration), students’ professional hopes and worries remain fundamentally the same. Can I find a career that feels interesting and meaningful? What are the “best” skills to cultivate, and where should I focus my professional development? Can I support myself, and eventually a family, in changing economic conditions? So—in the midst of these real, but familiar, concerns—what advice can we share about how to prepare for the age of AI without panicking?

1. Use AI as a competitive advantage

First, take advantage of the fact that there’s no incumbency advantage in AI use right now. If you’re a newly minted law school graduate, a senior partner with 30 years’ professional experience and connections will almost always hold an advantage over you in their knowledge of case law and ability to land clients. But no professional outside academia has 30 years’ experience in AI, so young professionals have just as much of an opportunity as anyone to gain knowledge, expertise, and professional stature through their deployment of AI in their jobs. Indeed, AI is especially valuable for young adults, as studies show that AI usage is most beneficial for employees with the least experience.
2. Focus on developing a transferable skill set

Second, focus on developing broad, transferable skill sets. We saw what happened when conventional wisdom (from politicians to business leaders) converged on the idea that everyone needed to be trained in software coding. Now, in the wake of layoffs at major tech companies and slowed hiring, newly minted software engineers are struggling to find jobs. If professional reinvention will be necessary for most of us throughout the course of our careers, we need to cultivate skills that can apply in multiple domains. For instance, when Dorie lost her job as a journalist, she applied her writing experience and knowledge of politics (the beat she covered) to pivot to her next job as a campaign spokesperson.

3. Build relationships

Finally, lean into interpersonal relationships, because—unlike you—AI can’t go to the watercooler. With enough data about meetings and emails, it’s true that it can analyze professional networks and see webs of influence within organizations. (Though many organizations are a long way from being able to fully deploy and capture the power of that analysis.) But, at least for the time being, AI won’t be able to pick up on what’s not captured in writing, from breakroom gossip and speculation to whispered advice and traded favors. Of course, we’re not suggesting that you become a Machiavellian operator, wielding insinuations and demanding reciprocity. But, in all of the discourse about what AI can and can’t replace, it seems clear that interpersonal connections—and the deep-seated principles that govern them, such as the general desire to reciprocate good deeds that others have done for us—are likely to persist. Investing in understanding other people and trying to help them where possible still seems like a worthy bet in the age of AI. In the past, young professionals could and did adapt to the new technological reality and find ways to make it their own. We believe this will happen again—and perhaps this might even take some of the pressure off the college experience, as students realize no one can predict the future and, therefore, there’s no “right answer” to be had as we navigate life choices. View the full article



