Everything posted by ResidentialBusiness
-
I’ve been auditing Google Ads accounts for over 10 years. I can confidently say that the same issues appear in most accounts. The good news? These issues are easy to fix and can quickly improve performance. The five key areas where I consistently find missed opportunities are:

- Location targeting: A default Google Ads setting can cause your ads to reach users outside your intended area. This is easy to fix and can save you a measurable amount of money.
- Auto-applied recommendations: Allowing Google to auto-apply changes can lead to costly mistakes. It’s better to review and apply these manually, except in specific cases.
- Campaign structure: Different structures work best in different situations.
- Campaign experiments: This underused feature allows you to test and apply changes with minimal risk – yet 90% of accounts overlook it.
- Performance Max for lead gen: While PMax can drive lead volume, the quality is often low. It works best for ecommerce and is rarely ideal for lead generation.

We’ll explore each of these areas in more detail to show you how to unlock better results from your Google Ads campaigns.

1. Optimizing location targeting settings

This is the first item I check when auditing an account, and it’s usually set up incorrectly. Under the campaign settings, you can enter the target location, but it’s important not to overlook the details. Beneath the target location, there are two additional options: “Presence or interest” and “Presence.” By default, “Presence or interest” is selected. This means your ads will reach people located in your target area and people who have shown interest in it – even if they’re far away. In most cases, it’s better to choose “Presence” to limit targeting to users physically in your specified location.

To check how much you’ve spent on users outside your target location, build a custom dashboard:

- Navigate to Campaigns > Dashboards.
- Add Country/Territory (User location) as a row.
- Include metrics like Cost, Clicks, or Impressions.

Be sure to select User location rather than Matched location. This shows where users were actually located when they saw your ads.

For example, a client targeting people in Australia discovered that, while most ad spend was correctly allocated, a significant amount still went to users outside Australia. This happened because the default “Presence or interest” setting was left unchanged – benefiting Google but wasting the advertiser’s budget. This simple report helps you identify how much money you can save by adjusting your location settings. If you prefer to pull the same numbers programmatically, see the sketch below.
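For readers who would rather script this check than build the dashboard by hand, here is a minimal sketch using the official google-ads Python client and its user_location_view resource, which reports where users were physically located when your ads served. This is an optional companion to the article’s UI steps, not something the author prescribes; the customer ID and the google-ads.yaml credentials path are placeholders.

```python
# Minimal sketch: spend by actual user location over the last 30 days.
# Assumes a configured google-ads.yaml and a placeholder customer ID.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder: your Google Ads account ID, no dashes

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

# user_location_view is the API counterpart of the "User location" report:
# it reflects where people actually were when they saw your ads.
query = """
    SELECT
      user_location_view.country_criterion_id,
      metrics.cost_micros,
      metrics.clicks,
      metrics.impressions
    FROM user_location_view
    WHERE segments.date DURING LAST_30_DAYS
"""

for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=query):
    for row in batch.results:
        cost = row.metrics.cost_micros / 1_000_000  # micros -> account currency
        print(
            f"geo criterion {row.user_location_view.country_criterion_id}: "
            f"cost {cost:.2f}, clicks {row.metrics.clicks}, "
            f"impressions {row.metrics.impressions}"
        )
```

The country_criterion_id values are numeric geo target IDs; if you want readable country names, the API’s geo target constant data can map the IDs for you. Either way, the goal is the same as the dashboard above: spotting spend that landed outside your intended area.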
Dig deeper: Improve your Google Ads performance: 3 simple setting changes

2. Taking control of auto-applied recommendations

Google serves millions of advertisers with varying experience levels. While Google Ads provides useful tools for low-touch advertisers, they are not always ideal for active managers focused on optimizing performance. If you want to manage your ad account effectively – which I highly recommend – this is another area where you can save money and improve results.

Some Google Ads recommendations are valuable, while others are not. Leaving decisions to the system is poor practice for active managers. Auto-applied recommendations should be turned off. Instead, review and apply them manually each week. You can find auto-applied recommendations in the Recommendations tab.

Some auto-applied recommendations can be harmful if left unchecked:

- “Add responsive search ads”: This allows the system to create new ad headlines and descriptions using content from your website. I recommend reviewing all ads before deployment. Leaving it to Google can result in awkward ad copy that may harm your brand and create compliance or legal risks.
- “Add new keywords”: This applies new keyword targeting, which may include irrelevant or broad match keywords. While some suggestions are useful, it’s best to review them manually.

However, some auto-applied recommendations are generally harmless and can be enabled without manual oversight:

- “Use optimized ad rotation”: This shows higher-performing ads more frequently instead of splitting impressions evenly. If you’re comfortable letting Google decide which ads to prioritize, this can be useful.
- “Remove non-serving keywords”: This helps reduce account clutter by removing keywords that do not receive impressions, which is usually beneficial.

Each account is unique, so evaluate these options based on your specific needs.

Dig deeper: Top Google Ads recommendations you should always ignore, use, or evaluate

3. Simplifying and aligning your campaign structure

There are many ways to structure Google Ads campaigns. While no single approach fits every business, some structures are less effective today. Common campaign structures include:

- Keyword match types: Separate campaigns for exact match and broad match keywords, where the same keyword appears in different campaigns with different match types.
- SKAGs (single keyword ad groups): Each ad group targets a single keyword, allowing highly specific ad experiences. This approach requires many campaigns and ad groups.
- Locations: One campaign per geographic region, such as a city, state, or suburb.

The best structure depends on your business context. For instance, a hyper-local service like a locksmith or dentist benefits from location-based campaigns.

Why automated bidding changed campaign structure

Campaigns built around keyword match types are becoming less relevant due to automated bidding. This system lets Google’s AI adjust bids across keywords, reducing the need for manual bidding. Automated bidding works best when keywords are grouped together, giving the system more data to optimize performance. Manual bidding is still useful in specific cases, like new service launches or managing high-performing (hero) keywords.

Focus on customer search intent

The most effective campaign structures mirror how customers search and engage with your product. Start by understanding their search behavior and align your campaigns accordingly. For example:

- A dentist may offer emergency, general, and root canal services. However, customers often search for “cheap dentist,” “dentist near me,” or “best-reviewed dentist.” Campaigns should reflect these search patterns, not just the business’s internal service categories.
- A mortgage restructuring company might label its service technically, but people are more likely to search for terms like “change my loan” or “update mortgage rate.” Targeting these common phrases improves results.

Capture sub-niches for better performance

Successful campaigns target sub-niches with enough search volume to drive results. For instance, a bank offering multiple products – loans, bank accounts, and credit cards – can improve performance by drilling down into specific categories like rewards cards or low annual fee cards.
Users searching for “rewards cards” show a clearer intent than those searching for “credit cards.” By matching your campaign structure to user intent, you create a seamless path from search keyword → ad copy → landing page – improving both relevance and performance.

It’s critical to avoid key mistakes when building your Google Ads account structure. Do build campaigns that reflect customer search intent and are as simple as possible. Don’t rely on outdated, complex structures that hinder automated bidding.

Dig deeper: PPC keyword strategy: How to align search intent with funnel stages

4. Leveraging Google Ads Experiments

If your Google Ads account is running smoothly, the next step is to unlock additional performance – this is where Google Ads Experiments come in. Surprisingly, many account managers overlook this powerful tool, which allows you to test changes with minimal risk and confidently improve your campaigns. Here’s how to use them effectively:

- Define your test: Identify a specific change you want to evaluate – such as increasing bids by a percentage, adding new keywords, or adjusting keyword match types.
- Apply the change: Implement the change on a portion of the traffic (50% is a common starting point) while keeping the other half as a control group.
- Measure the results: Monitor key metrics (CTR, CPA, ROAS) in real time. The platform provides statistical significance to help you evaluate performance (a rough standalone sketch of this idea follows after this section).
- Act on the outcome: If the change improves performance, apply it to the entire campaign with a single click. If results decline, you can easily revert the campaign to its previous state.

Without experiments, you’re either making changes blindly or hesitating to implement major updates due to uncertainty. Google Ads Experiments offer a safe and reliable way to test, refine, and optimize your account – helping you stay agile while minimizing risk.

Dig deeper: What 54 Google Ads experiments taught me about lead gen
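Google Ads Experiments reports statistical significance inside the platform, so no extra tooling is required. Purely as a rough, standalone illustration of the idea (not part of the article’s workflow), the sketch below runs a two-proportion z-test on conversion counts you might export from the control and trial arms; the numbers are made-up placeholders.

```python
# Rough illustration only: Google Ads Experiments already reports significance.
# This standalone two-proportion z-test just shows the underlying idea,
# using made-up placeholder numbers rather than real account data.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of two arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided normal tail
    return z, p_value

# Placeholder data: control arm vs. trial arm of a 50/50 experiment.
z, p = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=150, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value suggests the difference between arms is unlikely to be noise, but the platform’s own significance readout remains the signal to act on.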
5. Refining Performance Max for lead generation

Performance Max was originally designed for ecommerce and tends to deliver solid results in that context. However, for non-ecommerce businesses – such as lead generation or SaaS signups – its performance is often underwhelming. Here’s why PMax may fall short for lead generation and what to do instead.

Lead quality issues

While PMax can generate a high volume of leads, these leads often lack quality. Many lead generation businesses initially see promising results but are disappointed upon closer inspection.

Why it works for ecommerce

PMax performs better when paired with a product feed, allowing for more precise targeting. You can further refine performance by segmenting your product feed by categories or by top and bottom performers.

Challenges for lead generation

Without a product feed, Google heavily favors Google Display Network (GDN) inventory. This often results in a flood of low-cost but low-quality leads – many of which may be spam. A better approach for lead generation is to separate Search and Display campaigns:

- Create dedicated Search and Display campaigns to control your budget and targeting on each network.
- Use a dedicated GDN campaign for remarketing and custom search intent to maintain better oversight.

While setting up separate campaigns requires more effort than using a PMax campaign, it usually yields higher-quality leads and better long-term results. For lead generation businesses, relying on PMax without close monitoring and segmentation is unlikely to produce sustainable success.

Dig deeper: How to use Performance Max for any type of business

Fine-tune your Google Ads campaigns with these optimizations

Small changes can make a big difference in Google Ads. By refining targeting, controlling automation, structuring campaigns effectively, testing with experiments, and using PMax wisely, you’ll drive better results and reduce wasted spend.

View the full article
-
Autodesk forecast annual revenue and profit above Wall Street estimates on Thursday, boosted by strong demand for its design and engineering software across industries such as construction and manufacturing. The company also said it would reduce its workforce by about 9%, representing roughly 1,350 employees, and laid out plans to invest more heavily in cloud and artificial intelligence, adding that it would reallocate resources towards those areas.

Companies across sectors such as architecture, engineering, construction, and product design are making extensive use of Autodesk’s 3D design solutions, with the software maker’s artificial intelligence and machine learning capabilities further driving spending on its products. Autodesk saw a 23% jump in total billings to $2.11 billion in the fourth quarter ended January 31. The company’s international operations have shown particular strength, while analysts have also noted that the company is outpacing peers in the manufacturing sector, driven by the performance of its “Fusion” design software. Shares of the San Francisco, California-based company were up about 2% in extended trading.

Autodesk expects full-year revenue between $6.90 billion and $6.97 billion, above analysts’ average estimate of $6.90 billion, according to data compiled by LSEG. It projected an adjusted profit between $9.34 and $9.67 per share for its fiscal year 2026, also above the $9.24 per share estimated by analysts. The company reported total revenue of $1.64 billion in the fourth quarter, up 12% from a year earlier and above analysts’ average estimate of $1.63 billion. It posted an adjusted profit of $2.29 per share, beating estimates of $2.14 per share.

—Deborah Sophia, Reuters

View the full article
-
It’s an understatement to say that cryptocurrency investors have not had a great week. Tokens across the board have seen double-digit falls, slashing thousands of dollars from their values. However, one of the most affected coins this week is also the world’s most popular cryptocurrency: Bitcoin. In the past five days alone, Bitcoin’s value has dropped more than 16%, and today, the coin fell below an important psychological barrier. Here’s what you need to know about the likely reasons why Bitcoin and other cryptocurrencies are dropping.

Bitcoin falls below $80,000

In early trading this morning, Bitcoin fell below the psychologically important $80,000 barrier. At the time of this writing, it is trading at around $79,900 per coin, though it had dropped as low as roughly $78,400 earlier. When Bitcoin crosses a notable barrier like $60,000 or $100,000 (any increment of $10,000), it generally causes one of two reactions. If it is climbing past the barrier, this tends to send optimism through the hearts of investors—how high can it go? If it is falling below the barrier, this tends to generate fear and pessimism—how low can it go?

What is startling about Bitcoin’s fall is that the coin was trading above $95,000 at the beginning of this week. But by Tuesday, Bitcoin had fallen below the $90,000 threshold. Now, just three days later, Bitcoin has fallen below $80,000. That means that as of the time of this writing, Bitcoin has lost about 16% of its value in the last five days alone. The picture is even worse over the last month: during that time, Bitcoin lost more than 20% of its value. Bitcoin hasn’t traded this low since shortly after President Trump won the election in November 2024. But it’s not just Bitcoin that is falling.

Ethereum, XRP, DOGE, and TRUMP all down

As of the time of this writing, other major cryptocurrencies and popular meme coins are all down by a significant amount in the past day, according to data from Yahoo Finance and CoinMarketCap.

- Ethereum is down over 9% in the past 24 hours (and down over 24% in the past five days).
- XRP is down over 8.6% in the past 24 hours (and down over 20% in the past five days).
- Solana is down over 4% in the past 24 hours (and down over 20% in the past five days).
- Dogecoin is down over 10% in the past 24 hours (and down over 23% in the past five days).
- Official Trump is down over 13% in the past 24 hours (and down over 33% in the past seven days).

Why are Bitcoin and other cryptocurrencies dropping?

When major assets drop, the first thing people want to know is “why?” Unfortunately, there are no firm answers, but there are two likely reasons why Bitcoin and other cryptocurrencies are seeing increased downward pressure this week.

The first is Trump’s tariffs. The president says he plans to levy tariffs on goods coming into the United States from many of America’s major trading partners, including Mexico, Canada, China, and EU member states. Those countries, in turn, are expected to retaliate with tariffs on American goods, which could result in an all-out trade war that leads to higher prices for consumers, more rapid inflation, and reduced household discretionary spending. In other words, people are worried that Trump’s tariffs could negatively affect the economy. When the economy faces headwinds, investors tend to pull out of riskier and more volatile assets—like cryptocurrencies—in favor of placing their money into more stable assets.
The second reason that may be contributing to crypto’s fall this week is the Bybit hack from earlier this month, which saw hackers steal $1.5 billion worth of cryptocurrencies. That theft, believed to be the largest crypto heist ever, has rattled crypto investors, making many feel that their cryptocurrency investments aren’t as secure as other assets, like stocks and property. In other words, recent events are not working in crypto’s favor. As for where Bitcoin and other cryptocurrencies go from here, that’s anyone’s guess.

View the full article
-
As President Donald Trump’s administration looks to reverse a cornerstone finding that climate change endangers human health and welfare, scientists say they just need to look around because it’s obvious how bad global warming is and how it’s getting worse. New research and ever more frequent extreme weather further prove the harm climate change is doing to people and the planet, 11 different scientists, experts in health and climate, told The Associated Press soon after word of the administration’s plans leaked out Wednesday. They cited peer-reviewed studies and challenged the Trump administration to justify its own effort with science. “There is no possible world in which greenhouse gases are not a threat to public health,” said Brown University climate scientist Kim Cobb. “It’s simple physics coming up against simple physiology and biology, and the limits of our existing infrastructure to protect us against worsening climate-fueled extremes.” EPA’s original finding on danger of greenhouse gases Environmental Protection Agency chief Lee Zeldin has privately pushed the White House for a rewrite of the agency’s finding that planet-warming greenhouse gases put the public in danger. The original 52-page decision in 2009 is used to justify and apply regulations and decisions on heat-trapping emissions of greenhouse gases, such as carbon dioxide and methane, from the burning of coal, oil and natural gas. “Carbon dioxide is the very essence of a dangerous air pollutant. The health evidence was overwhelming back in 2009 when EPA reached its endangerment finding, and that evidence has only grown since then,” said University of Washington public health professor Dr. Howard Frumkin, who as a Republican appointee headed the National Center for Environmental Health at the time. “CO2 pollution is driving catastrophic heat waves and storms, infectious disease spread, mental distress, and numerous other causes of human suffering and preventable death.” That 2009 science-based assessment cited climate change harming air quality, food production, forests, water quality and supplies, sea level rise, energy issues, basic infrastructure, homes and wildlife. A decade later, scientists document growing harm Ten years later, a group of 15 scientists looked at the assessment. In a paper in the peer-reviewed journal Science they found that in nearly all those categories the scientific confidence of harm increased and more evidence was found supporting the growing danger to people. And the harms were worse than originally thought in the cases of public health, water, food and air quality. Those scientists also added four new categories where they said the science shows harm from climate change caused by greenhouse gas emissions. Those were in national security, economic well-being of the country, violence and oceans getting more acidic. On national security, the science team quoted Trump’s then-defense secretary, chairman of the joint chiefs of staff and a Pentagon authorization bill that Trump signed in his first term. It also quoted a study that said another 1.8 degrees Fahrenheit (1 degree Celsius) of warming in the next 75 years would effectively reduce the U.S. gross domestic product by 3%, while another study said warming would cost the American economy $4.7 trillion to $10.4 trillion by the end of the century. “Overall, the scientific support for the endangerment finding was very strong in 2009. 
It is much, much stronger now,” Stanford University environment program chief Chris Field, a co-author of the 2019 Science review, said in a Wednesday email. “Based on overwhelming evidence from thousands of studies, the well-mixed greenhouse gases pose a danger to public health and welfare. There is no question.”

Long list of climate change’s threats to health

“There is global consensus that climate change is the biggest threat of our time to both health and health systems,” said Dr. Courtney Howard, a Canadian emergency room physician and vice chair of the Global Climate and Health Alliance. Howard ticked off a long list: heat-related illnesses, worsening asthma, heart disease worsened by wildfire smoke, changing habitats for disease-carrying mosquitoes, ticks and other insects, and crop failures that drive hunger, war and migration.

Kristie Ebi, a public health and climate scientist at the University of Washington, said a big but little-discussed issue is how crops grown under higher carbon dioxide levels have less protein, vitamins and nutrients. That’s 85% of all plants, and that affects public health, she said. Field experiments have shown wheat and rice grown under high CO2 have 10% less protein, 30% less B-vitamins and 5% less micronutrients.

It’s these indirect effects on human health that are “far-reaching, comprehensive and devastating,” said Katharine Hayhoe, an atmospheric scientist at Texas Tech and chief scientist at The Nature Conservancy. She said rising carbon dioxide levels in the air even “affect our ability to think and process information.”

Scientists said the Trump administration will be hard-pressed to find scientific justification — or legitimate scientists — to show how greenhouse gases are not a threat to people. “This is one of those cases where they can’t contest the science and they’re going to have a legal way around,” Princeton University climate scientist Michael Oppenheimer said.

The Associated Press’ climate and environmental coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

—Seth Borenstein, AP science writer

Associated Press writer Matthew Daly contributed.

View the full article
-
Another week and more Google search ranking volatility hit mid-week, did you notice? Google's crawler might be causing issues on your site this week. Google is still making its crawling more efficient and better. Google Search Console's API had delays this week. Google was sued over AI Overviews...

View the full article
-
It’s no longer groundbreaking to say that the SEO landscape is evolving. But this time, the shift is fundamental. We’re entering an era where search is no longer just about keywords but understanding. At the core of this shift is vector-based SEO. Optimizing for vectors gives websites a major advantage in search engines and overall web presence. As AI and large language models (LLMs) continue to shape digital experiences, websites that adapt early will stay ahead of the competition.

What are vectors?

Vectors are a mathematical way for AI to understand and organize information beyond just text. Instead of relying on exact keyword matches, search engines now use vector embeddings – a technique that maps words, phrases, and even images into multi-dimensional space based on their meaning and relationships. Think of it this way: If a picture is worth a thousand words, vectors are how AI translates those words into patterns it can analyze. For SEOs, a helpful analogy is that vectors are to AI what structured data is to search engines – a way to provide deeper context and meaning.

How vectors change search

By leveraging semantic relationships, embeddings, and neural networks, vector-based search allows AI to interpret intent rather than just keywords. This means search engines can surface relevant results even when a query doesn’t contain the exact words from a webpage. For example, a search for “Which laptop is best for gaming?” may return results optimized for “high-performance laptops” because AI understands the conceptual link. More importantly, vectors help AI interpret content that isn’t purely text-based, which includes:

- Colloquial phrases (e.g., “bite the bullet” vs. “make a tough decision”).
- Images and visual content.
- Short-form videos and webinars.
- Voice search queries and conversational language.

Source: Airbyte

This shift has been years in the making. Google has been moving toward vector-based search for over a decade, starting with the Hummingbird update in 2013, which prioritized understanding content over simple keyword matching. You might recall RankBrain, Google’s first AI-powered algorithm from 2015, which paved the way for BERT, MUM, and Microsoft’s enhanced Bing Search – all of which rely on vectorized data to interpret user intent with greater accuracy. At its core, vector-based search represents a fundamental change: SEO is no longer about optimizing for exact words but for meaning, relationships, and relevance. As AI continues to evolve, websites that adapt to this approach will have a significant advantage. For a hands-on feel for how embeddings capture that conceptual link, see the short sketch below.

Dig deeper: AI optimization: How to optimize your content for AI search and agents
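As a toy illustration of the embedding idea described above (not something the article itself provides), the sketch below uses the open-source sentence-transformers library to embed a query and two phrases and compare them with cosine similarity; the model name is just one common general-purpose choice, not a recommendation from the author.

```python
# Toy sketch: semantically related phrases land close together in vector
# space even with no overlapping keywords.
# Requires `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one common general-purpose model

phrases = [
    "Which laptop is best for gaming?",   # the query from the example above
    "high-performance laptops",           # related concept, different words
    "easy weeknight dinner recipes",      # unrelated control phrase
]
embeddings = model.encode(phrases, convert_to_tensor=True)

# Cosine similarity: closer to 1.0 means more semantically similar.
sims = util.cos_sim(embeddings[0], embeddings[1:])
print(f"gaming query vs. 'high-performance laptops': {sims[0][0].item():.2f}")
print(f"gaming query vs. dinner recipes:             {sims[0][1].item():.2f}")
```

The absolute scores depend on the model; what matters is the gap between the related and unrelated pairs, which is the kind of relationship vector-based search exploits.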
How vectors impact your SEO strategy

So, what does this mean for SEO? If “content is king” has been the mantra for the past decade, then “content is emperor” might be the new reality. A king rules over one kingdom, but an emperor governs many. Similarly, making your content readable to AI doesn’t just improve search engine visibility. It makes your website discoverable across a broader range of AI-driven tools that generate answers to user queries. Practically speaking, there are a few key ways SEOs should adjust their approach to keep websites future-ready. Here are three strategies to start with.

From content strategy and keyword research to semantic topic modeling

Search volume and keyword difficulty will remain key metrics for now. However, AI tools can provide deeper insights – such as identifying the entities and topics Google associates with your competitors’ content. Instead of just checking keyword volume, analyze the top-ranking pages using NLP tools to see how they structure their topics. Adjust your content briefs to cover semantically related topics, not just one keyword and its variations.

From content optimization to intent matching and semantic SEO

Traditional SEO prioritizes exact match keywords and their variations, while AI-driven optimization focuses on aligning with search intent. This means you’ll want to:

- Run your content through Google’s NLP API to see which topics and entities it detects, and compare with competitors that may be ranking better than you.
- Optimize existing content not only to add keywords but to add missing context and answer related user queries, using tools like AlsoAsked and AnswerThePublic.

Source: “How to use 12 micro intents for SEO and content journey mapping,” Olaf Kopp

From SERP and ranking predictions to AI-based performance forecasting

Traditionally, site changes required weeks or months to assess ranking impact. Now, AI can predict performance using vector analysis, giving you another data point for smarter decision-making. Before publishing, paid AI tools like Clearscope or MarketMuse can score your content against high-performing pages. (For smaller projects, free tools like the Google Cloud NLP demo offer similar insights.) Use a paid tool like SurferSEO’s SERP Analysis or Outranking.io’s free plan to prioritize content updates based on their likelihood to rank.

How vectors don’t change SEO strategy

We’re not reinventing the wheel. AI still relies on many of the same principles as traditional SEO. Even if you’re not ready to fully integrate vector-based strategies, you can still optimize your site with them in mind.

Great content matters above all else

Comprehensive, intent-focused content remains essential for both users and AI, and its importance will only grow. If you haven’t already structured your pages around user intent, now is the time.

- Write in natural language, focusing on fully answering user queries.
- Ensure your pages pass the blank sheet of paper test (i.e., they provide unique value on their own).
- Include synonyms, related phrases, and different ways users might phrase questions.

Technical SEO gives AI the roadmap it needs

Search engines – and the AI models behind them – still rely on clear signals to understand and rank content effectively. It stands to reason that the use of many of these signals will remain consistent, at least for now.

- Use structured data to give search engines and AIs more context about the content they’re analyzing.
- Craft an internal link strategy that makes sense to the average person and demonstrates strong semantic connections between your pages.

Dig deeper: Optimizing for AI search: Why classic SEO principles still apply

What’s next?

As search engines increasingly rely on AI and LLMs, SEO is shifting away from a sole focus on keywords and toward the broader, more intricate concept of meaning. AI systems interpret meaning through vectors, leveraging semantic relationships, embeddings, and neural networks. You can prepare for this shift by optimizing for vector-based search, focusing on user intent, content depth, and semantic connections. AI may be the new frontier, but those who embrace change early have the greatest opportunity to drive innovation and shape the future.

View the full article
-
Microsoft Bing Search's Copilot Answers now can show maps and local results, including local ads. The AI answer has a map on the left side and then on the right side it has local organic and paid listings with the company name, address, phone, website and directions - plus a photo. View the full article
-
Recently, I overheard a conversation at a local coffee shop: “Thank god for the new administration and finally taking a stand against DEI,” said one of the men to another, as they sipped their coffee. “It’s ridiculous and unfair, completely ruining work. We can finally get back to business.” I leaned in a bit further to try and listen in as I paid for my Earl Grey tea. “Well . . . I’m not sure that’s entirely true,” the other man said, hesitating. “I think that . . . ” “Finally, we can get back to raising standards,” the first man interrupted. “It’s about time! By the way, are you going to the game next week?” The other man looked uncomfortable as the conversation swiftly shifted in a completely different direction.

While I was done paying, and also done eavesdropping, I left knowing that what I heard in this local coffee shop was not an isolated conversation. The backlash against diversity, equity, and inclusion is playing out on the national and world stage almost every single day. And the backlash is also taking place on much smaller stages, in conversations in our conference rooms and in our hallways, amongst colleagues loudly and in whispers in our workplaces. And in these conversations, there’s an opportunity to talk and educate each other about what diversity, equity, and inclusion is and what diversity, equity, and inclusion is not. Here are three of the most common statements I am hearing from individuals making the case against diversity, equity, and inclusion, and here’s how we can debunk these statements and continue to help educate each other on what is true and what is not.

False argument against DEI: We lower our standards when it comes to talent

Diversity, equity, and inclusion is not about lowering our standards; diversity, equity, and inclusion is about setting fair and equitable standards on how we evaluate all talent. The term “DEI hire” is being used to make us believe that we have lowered standards by hiring individuals from different backgrounds and different lived experiences. In reality, “DEI hire” is a harmful and hurtful phrase that leads many to believe that someone was handed a job simply because they may “look different” or “be different” or are a “quota hire.” And it is increasingly becoming an acceptable way to discredit, demoralize, and disrespect leaders of color.

One of the key outcomes of diversity, equity, and inclusion is creating standardized processes on how we hire talent, and also on who we choose to develop and promote. This includes using software tools like Greenhouse, which helps you ensure that every candidate for a role meets with the same set of interviewers, that interview questions are aligned in advance, and that there’s a way to evaluate and score the interviews and debrief together as an interview team. Otherwise, we fall prey to our biases and may hire people who look like us, think like us, and act like us, or simply hire them because we really just like them.

When it comes to how we develop and promote talent, software tools like Lattice help us ensure we set clear and reasonable goals for all, and not just some, employees. We can then track progress in weekly meetings, we can give and receive coaching and feedback, and we can have a consistent framework when we evaluate talent during performance review time. And how we evaluate talent is also then connected to how we compensate individuals, and ultimately who we choose to promote.
Without these standardized processes, we may end up giving better performance reviews and more money to those who are the most vocal, who spend the most time managing up to us, and who we just find ourselves having more in common with. Diversity, equity, and inclusion efforts help us raise standards and make sure we are getting the best out of our talent.

False argument against DEI: It distracts us from driving revenue

Diversity, equity, and inclusion does not distract us from leading our businesses; in fact, diversity, equity, and inclusion is a driver of the business. It’s not a separate initiative that sits apart from the business; it should be integrated into everything we do in our workplaces. These efforts not only help us ensure that we get the best out of our talent, but they also ensure we are able to best serve our customers. According to Procter & Gamble, the buying power of the multicultural consumer is more than $5 trillion. Procter & Gamble reminds us that it’s no longer multicultural marketing; it’s in fact mainstream marketing. There is growth to be had when we ensure we connect with and authentically serve not just the multicultural consumer, but also veterans, individuals with disabilities, the LGBTQ+ community, and many more communities. Understanding their consumer needs and how your business’s products and services can surprise and delight them, and enhance the quality of their lives, is an untapped competitive advantage.

Companies like E.L.F. understand this, with a strong focus on diversity, equity, and inclusion efforts that have paid off: It has posted 23 consecutive quarters of sales growth. Over the past five years, the company has also seen its stock increase by more than 700%. In contrast, since Target announced a rollback of its diversity, equity, and inclusion efforts, it has experienced a decline in sales. Black church leaders are now calling on their congregations to participate in a 40-day boycott of Target. Black consumers have $2 trillion in buying power, setting digital trends and engagement. “We’ve got to tell corporate America that there’s a consequence for turning their back on diversity,” said Bishop Reginald T. Jackson to USA Today. “So let us send the message that if corporate America can’t stand with us, we’re not going to stand with corporate America.”

False argument against DEI: An inclusive work environment only benefits a few

Diversity, equity, and inclusion is not about creating an inclusive environment for a select few. Diversity, equity, and inclusion is about creating workplaces where we all have an opportunity to reach our potential and help our companies reach their potential. In my book, Reimagine Inclusion: Debunking 13 Myths to Transform Your Workplace, I tackle the myth that diversity, equity, and inclusion processes and policies only have a positive effect on a certain group of individuals. I share “The Curb-Cut Effect,” which is a prime example of this. In 1972, faced with pressure from activists advocating for individuals with disabilities, the city of Berkeley, California, installed its first official “curb cut” at an intersection on Telegraph Avenue. In the words of a Berkeley advocate, it was “the slab of concrete heard ’round the world.” This not only helped people in wheelchairs. It also helped parents pushing strollers, the elderly with walkers, travelers wheeling luggage, workers pushing heavy carts, skateboarders, and runners. People went out of their way, and continue to do so, to use a curb cut.
“The Curb-Cut Effect” shows us that one action targeted to help a community ended up helping many more people than anticipated. So, in our workplaces, policies like flexible work hours and remote work options, parental leave and caregiver assistance, time off for holidays and observances, adaptive technologies, mental health support, accommodations for individuals with disabilities, and more have a ripple effect and create workplaces where everyone has an opportunity to thrive.

Don’t fall for the rhetoric that “DEI” is exclusive, unfair, or a distraction. The goal of diversity, equity, and inclusion efforts has always been about leveling the playing field and ensuring we are creating workplaces where each and every one of us has an opportunity to succeed.

View the full article
-
A year or more ago, Google added a toggle switch to use Gemini directly in the iOS Google Search app. Well, that toggle was removed this week: you now need to go to the native Gemini app to use Gemini; you can no longer use it from the Google app.

View the full article
-
The notion of authenticity in the movies has moved a step beyond the merely realistic. More and more, expensive and time-consuming fixes to minor issues of screen realism have become the work of statistical data renderings—the visual or aural products of generative artificial intelligence. These fixes are deployed for effects that actors used to have to create themselves, with their own faces, bodies, and voices, and filmmakers now deem them necessary because they are more authentic than what actors can do with just their imaginations, wardrobe, makeup, and lighting. The paradox is that in this scenario, “authentic” means inhuman: The further from actual humanity these efforts have moved, the more we see them described by filmmakers as “perfect.” Is perfect the enemy of good? It doesn’t seem to matter to many filmmakers working today. These fixes are designed to be imperceptible to humans, anyway.

Director Brady Corbet’s obsession with “perfect” Hungarian accents in his Oscar-nominated architecture epic, The Brutalist, is a case in point. Corbet hired the Ukraine-based software company Respeecher to enhance accents by using AI to smooth out vowel sounds when actors Adrien Brody and Felicity Jones (American and British, respectively) speak Hungarian in the film. Corbet said it was necessary to do that because, as he told the Los Angeles Times, “this was the only way for us to achieve something completely authentic.” Authenticity here meant integrating the voice of the film’s editor, Dávid Jancsó, who accurately articulated the correct vowel sounds. Jancsó’s pronunciation was then combined with the audio track featuring Brody and Jones, merging them into a purportedly flawless rendition of Hungarian that would, in Corbet’s words in an interview with GQ, “honor the nation of Hungary by making all of their off-screen Hungarian dialogue absolutely perfect.”

The issue of accents in movies has come to the fore in recent years. Adam Driver and Shailene Woodley were, for instance, criticized for their uncertain Italian accents in 2023’s Ferrari. Corbet evidently wanted to make sure that would not happen if any native Hungarian speakers were watching The Brutalist (few others would notice the difference). At times, Brody and Jones speak in Hungarian in the film, but mostly they speak in Hungarian-accented English. According to Corbet, Respeecher was not used for that dialogue.

Let’s say that for Corbet this will to perfection, with the time and expense it entailed, was necessary to his process, and that having the voice-overs in translated Hungarian-accented English might have been insultingly inauthentic to the people of Hungary, making it essential that the movie sound, at all times, 100% correct when Hungarian was spoken. Still, whether the Hungarian we hear in The Brutalist is “absolutely perfect” is not the same as it being “completely authentic,” since it was never uttered as we hear it by any human being. And, as it turns out, it was partially created in reaction to something that doesn’t exist.

In his interview with the Los Angeles Times, Corbet said that he “would never have done it any other way,” recounting when he and his daughter “were watching North by Northwest and there’s a sequence at the U.N., and my daughter is half-Norwegian, and two characters are speaking to each other in [air quotes] Norwegian. My daughter said: ‘They’re speaking gibberish.’ And I think that’s how we used to paint people brown, right? 
And, I think that for me, that’s a lot more offensive than using innovative technology and really brilliant engineers to help us make something perfect.”

But there is no scene in Alfred Hitchcock’s 1959 film North by Northwest set at the United Nations or anywhere else in which two characters speak fake Norwegian or any other faked language. Furthermore, when Corbet brings in the racist practice of brownface makeup that marred movies like 1961’s West Side Story, he is doing a further disservice to Hitchcock’s film. The U.N. scene in North by Northwest features Cary Grant speaking with a South Asian receptionist played by Doris Singh, not an Anglo in brownface. Corbet’s use of AI, then, is based on something that AI itself is prone to, and criticized for: a “hallucination” in which previously stored data is incorrectly combined to fabricate details and generate false information that tends toward gibberish. While the beginning of Hitchcock’s Torn Curtain (1966) is set on a ship in a Norwegian fjord and briefly shows two ship’s officers conversing in a faked, partial Norwegian, Corbet’s justification was based on a false memory. His argument against inauthenticity is inauthentic itself.

AI was used last year in other films besides The Brutalist. Respeecher also “corrected” the pitch of trans actress Karla Sofía Gascón’s singing voice in Emilia Pérez. It was used for blue eye color in Dune: Part Two. It was used to blend the face of Anya Taylor-Joy with the actress playing a younger version of her, Alyla Browne, in Furiosa: A Mad Max Saga. Robert Zemeckis’s Here, with Tom Hanks and Robin Wright playing a married couple over a many-decade span, deployed a complicated “youth mirror system” that used AI in the extensive de-agings of the two stars. Alien: Romulus brought the late actor Ian Holm back to on-screen life, reviving him from the original 1979 Alien in a move derided not only as ethically dubious but, in its execution, cheesy and inadequate.

The use of AI in documentaries to re-create the speech of people who have died is especially susceptible to accusations of both cheesiness and moral irresponsibility. The 2021 documentary Roadrunner: A Film About Anthony Bourdain used an AI version of the late chef and author’s voice for certain lines spoken in the film, which “provoked a striking degree of anger and unease among Bourdain’s fans,” according to The New Yorker. These fans called resurrecting Bourdain that way “ghoulish” and “awful.”

Dune: Part Two [Photo: Warner Bros. Pictures]

Audience reactions like these, though frequent, do little to dissuade filmmakers from using complicated AI technology where it isn’t needed. In last year’s documentary Endurance, about explorer Ernest Shackleton’s ill-fated expedition to the South Pole from 1914 to 1916, filmmakers used Respeecher to exhume Shackleton from the only known recording of his voice, a noise-ridden four-minute Edison wax cylinder on which the explorer is yelling into a megaphone. Respeecher extracted from this something “authentic,” which is said to have duplicated Shackleton’s voice for use in the documentary. This ghostly, not to say creepy, version of Shackleton became a selling point for the film, and answered the question, “What might Ernest Shackleton have sounded like if he were not shouting into a cone and recorded on wax that has deteriorated over a period of 110 years?” Surely an actor could have done as well as Respeecher with that question. 
Similarly, a new three-part Netflix documentary series, American Murder: Gabby Petito, has elicited discomfort from viewers for using an AI-generated voice-over of Petito as its narration. The 22-year-old was murdered by her fiancé in 2021, and X users have called exploiting a homicide victim this way “unsettling,” “deeply uncomfortable,” and perhaps just as accurately, “wholly unnecessary.” The dead have no say in how their actual voices are used. It is hard to see resurrecting Petito that way as anything but a macabre selling point—carnival exploitation for the streaming era.

Besides the reanimation of Petito and the creation of other spectral voices from beyond the grave, there is a core belief that the proponents of AI enact but never state, one particularly apropos in a boomer gerontocracy in which the aged refuse to relinquish power. That belief is that older is actually younger. When an actor has to be de-aged for a role, such as Harrison Ford in 2023’s Indiana Jones and the Dial of Destiny, AI is enlisted to scan all of Ford’s old films to make him young in the present, dialing back time to overwrite reality with an image of the past. Making a present-day version of someone young involves resuscitating a record of a younger version of them, like in The Substance but without a syringe filled with yellow serum.

When it comes to voices, therefore, it is not just the dead who need to be revived. Ford’s Star Wars compatriot Mark Hamill had a similar process done, but only to his voice. For an episode of The Mandalorian, Hamill’s voice had to be resynthesized by Respeecher to sound like it did in 1977. Respeecher did the same with British singer Robbie Williams for his recent biopic, Better Man, using versions of Williams’s songs from his heyday and combining his voice with that of another singer to make him sound like he did in the 1990s.

Here [Photo: Sony Pictures]

While Zemeckis was shooting Here, the “youth mirror system” he and his AI team devised consisted of two monitors that showed scenes as they were shot, one the real footage of the actors un-aged, as they appear in real life, and the other using AI to show the actors to themselves at the age they were supposed to be playing. Zemeckis told The New York Times that this was “crucial.” Tom Hanks, the director explained, could see this and say to himself, “I’ve got to make sure I’m moving like I was when I was 17 years old.” “No one had to imagine it,” Zemeckis said. “They got the chance to see it in real time.” No one had to imagine it is not a phrase heretofore associated with actors or the direction of actors.

Nicolas Cage is a good counterexample to this kind of work, which as we see goes far beyond perfecting Hungarian accents. Throughout 2024, Cage spoke against AI every chance he got. In an acceptance speech at the recent Saturn Awards, he mentioned that he is “a big believer in not letting robots dream for us. Robots cannot reflect the human condition for us. That is a dead end. If an actor lets one AI robot manipulate his or her performance even a little bit, an inch will eventually become a mile and all integrity, purity, and truth of art will be replaced by financial interests only.” In a speech to young actors last year, Cage said, “The studios want this so that they can change your face after you’ve already shot it. 
They can change your face, they can change your voice, they can change your line deliveries, they can change your body language, they can change your performance.” And he said in a New Yorker interview last year, speaking about the way the studios are using AI, “What are you going to do with my body and my face when I’m dead? I don’t want you to do anything with it!” All this from a man who swapped faces with John Travolta in 1997’s Face/Off with no AI required—and “face replacement” is now one of the main things AI is used for. In an interview with Yahoo Entertainment, Cage shared an anecdote about his recent cameo appearance as a version of Superman in the much-reviled 2023 superhero movie The Flash. “What I was supposed to do was literally just be standing in an alternate dimension, if you will, and witnessing the destruction of the universe. . . . And you can imagine with that short amount of time that I had, what that would mean in terms of what I could convey—I had no dialogue—what I could convey with my eyes, the emotion. . . . When I went to the picture, it was me fighting a giant spider. . . . They de-aged me and I’m fighting a spider.” Now that’s authenticity. View the full article
-
The sky is about to get a lot clearer. NASA’s latest infrared space telescope, SPHEREx—short for Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer—will assemble the world’s most complete sky survey to better explain how the universe evolved. The $488 million mission will observe far-off galaxies and gather data on more than 550 million galaxies and stars, measure the collective glow of the universe, and search for water and organic molecules in the interstellar gas and dust clouds where stars and new planets form.

The 1,107-lb., 8.5 x 10.5-foot spacecraft is slated to launch March 2 at 10:09 pm (ET) aboard a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California. (Catch the launch on NASA+ and other platforms.) From low-Earth orbit, it will produce 102 maps in 102 infrared wavelengths every six months over two years, creating a 3D map of the entire night sky that glimpses back in time at various points in the universe’s history to fractions of a second after the Big Bang nearly 14 billion years ago. Onboard spectroscopy instruments will help determine the distances between objects and their chemical compositions, including water and other key ingredients for life.

SPHEREx Prepared for Thermal Vacuum Testing [Photo: NASA/JPL-Caltech/BAE Systems]

Mapping how matter dispersed over time will help scientists better understand the physics of inflation—the instantaneous expansion of the universe after the Big Bang and the reigning theory that best accounts for the universe’s uniform, weblike structure and flat geometry. Scientists hypothesize the universe exploded in a split second, from smaller than an atom to many trillions of times its size, producing ripples in the temperature and density of the expanding matter to form the first galaxies. “SPHEREx is trying to get at the origins of the universe—what happened in those very few first instances after the Big Bang,” says SPHEREx instrument scientist Phil Korngut. “If we can produce a map of what the universe looks like today and understand that structure, we can tie it back to those original moments just after the Big Bang.”

[Photo: BAE Systems/Benjamin Fry]

SPHEREx’s approach to observing the history and evolution of galaxies differs from space observatories that pinpoint objects. To account for galaxies existing beyond the detection threshold, it will study a signal called the extragalactic background light. Instead of identifying individual objects, SPHEREx will measure the total integrated light emission that comes from going back through cosmic time by overlaying maps of all of its scans. If the findings highlight areas of interest, scientists can turn to the Hubble and James Webb space telescopes to zoom in for more precise observations. To prevent spacecraft heat from obscuring the faint light from cosmic sources, its telescope and instruments must operate in extreme cold, nearing minus 380 degrees Fahrenheit. To achieve this, SPHEREx relies on a passive cooling system, meaning no electricity or coolants, that uses three cone-shaped photon shields and a mirrored structure beneath them to block the heat of Earth and the Sun and direct it into space.

Searching for life

In scouting for water and ice, the observatory will focus on collections of gas and dust called molecular clouds. Every molecule absorbs light at different wavelengths, like a spectral fingerprint. Measuring how much the light changes across the wavelengths indicates the amount of each molecule present. 
“It’s likely the water in Earth’s oceans originated in a molecular cloud,” says SPHEREx science data center lead Rachel Akeson. “While other space telescopes have found reservoirs of water in hundreds of locations, SPHEREx will give us more than nine million targets. Knowing the water content around the galaxy is a clue to how many locations could potentially host life.” More philosophically, finding those ingredients for life “connects the questions of ‘how did the universe evolve?’ and ‘how did we get here?’ to ‘where can life exist?’ and ‘are we alone in that universe?’” says Shawn Domagal-Goldman, acting director of NASA’s Astrophysics Division.

Solar wind study

The SpaceX rocket will also carry another two-year mission, the Polarimeter to Unify the Corona and Heliosphere (PUNCH), to study the solar wind and how it affects Earth. Its four small satellites will focus on the sun’s outer atmosphere, the corona, and how it moves through the solar system and bombards Earth’s magnetic field, creating beautiful auroras but endangering satellites and spacecraft. The mission’s four suitcase-size satellites will use polarizing filters to piece together a 3D view of the corona and capture data that helps determine the solar wind’s speed and direction. “That helps us better understand and predict the space weather that affects us on Earth,” says PUNCH mission scientist Nicholeen Viall. “This ‘thing’ that we’ve thought of as being big, empty space between the sun and the Earth, now we’re gonna understand exactly what’s within it.”

PUNCH will combine its data with observations from other NASA solar missions, including the Coronal Diagnostic Experiment (CODEX), which views the inner corona from the International Space Station; the Electrojet Zeeman Imaging Explorer (EZIE), which launches in March to investigate the relationship between magnetic field fluctuations and auroras; and the Interstellar Mapping and Acceleration Probe (IMAP), which launches later this year to study solar wind particle acceleration through the solar system and its interaction with the interstellar environment.

A long journey

SPHEREx spent years in development before its greenlight in 2019. NASA’s Jet Propulsion Laboratory managed the mission, enlisting BAE Systems to build the telescope and spacecraft bus, and finalizing it as Los Angeles’s January wildfires threatened its campus. Scientists from 13 institutions in the U.S., South Korea, and Taiwan will analyze the resulting data, which Caltech’s Infrared Processing & Analysis Center will process and house, and the NASA/IPAC Infrared Science Archive will make publicly available.

[Image: JPL]

“I am so unbelievably excited to get my hands on those first images from SPHEREx,” says Korngut. “I’ve been working on this mission since 2012 as a young postdoc, and the journey it’s taken from conceptual designs to here on the launcher is just so amazing.” Adds Viall, “All the PowerPoints are now worth it.”

View the full article