Justin Joffe — PR Daily: News for PR professionals (https://www.prdaily.com)

Breaking through in 2025: Microsoft and T-Mobile leaders share CommsWeek takeaways
https://www.prdaily.com/breaking-through-in-2025-microsoft-and-t-mobile-leaders-share-commsweek-takeaways/
Wed, 27 Nov 2024 11:00:35 +0000

The post Breaking through in 2025: Microsoft and T-Mobile leaders share CommsWeek takeaways appeared first on PR Daily.

Microsoft’s John Cirone and T-Mobile’s Tara Darrow recap the lessons that stuck out at Ragan’s Future of Communications Conference 2024.

The challenges communicators face are growing more complex. Understanding how to address increasingly divided audiences, blurred lines between internal and external communications and integration of AI into workflows requires preparation, adaptation and the ability to build bridges.

These ideas were recurring themes at Ragan’s Future of Communications Conference, the flagship event of CommsWeek 2024.

During a webinar recapping takeaways from the event, Mike Prokopeak, director of learning and council content for Ragan’s Communications Leadership Council, spoke to two council members and winners of Ragan’s inaugural Vanguard Awards — John Cirone, senior director of global employee and executive communications at Microsoft, and Tara Darrow, vice president, corporate and financial communications, values and reputation and executive brand at T-Mobile — about the lessons that stuck with them.

Communications priorities in the new year

As organizations brace for 2025, communicators must focus on aligning with corporate priorities while embracing new tools and techniques. This means:

  • Aligning communications with core business goals. This will require balancing internal and external messaging priorities while making room to try new things and innovate alongside new products and org structures.
  • Dedicating time for experimentation with emerging tools like AI.
    • “It’s about making AI a daily habit,” said Cirone. “Carving out space to experiment with tools like Copilot allows us to uncover ways to work smarter while staying aligned with the company’s priorities.”
    • “AI streamlines the repetitive tasks so we can focus on higher value work that truly drives business outcomes,” Darrow agreed. “It takes the busy work off our plates.”
  •  Staying agile and anticipating external challenges.
    • Darrow’s focus at T-Mobile is guided by a three-year strategic plan that emphasizes agility to fulfill a dual mission of both driving and transforming the business.
    • Both leaders spoke to the urgency of being prepared for shifting regulatory environments and emerging social issues. “We need to be ready to engage where it matters most while staying true to our values as a company,” Darrow said.

Adapting to the ‘shattered glass’ media ecosystem

Both Darrow and Cirone agreed with the metaphor of today’s media landscape feeling like “shattered glass,” with news and information flowing from endless platforms, voices and nontraditional outlets like podcasts or Substack sites.

Navigating this will require:

  • Integrating traditional and emerging media platforms into a cohesive, holistic strategy.
    • Traditional PR methods like solely relying on press releases no longer cut it. Balancing an ever-expanding array of channels and platforms requires going where your intended audiences are most active in a way that feels like an authentic fit for the brand.
    • “News is coming from everywhere—TikTok, Instagram, podcasts, social media and traditional outlets,” Darrow said. “This fragmentation forces us to integrate across platforms, from influencers to customer voices, in ways we hadn’t before.”
  • Making sense of external messaging for internal audiences.
    • As employees look for internal messaging to find clarity amid the noise, communicators become the translators of this ecosystem—and how their organization exists within it.
    • “A decade ago, employees cited external sources as their most trusted information channels. Today, our internal channels dominate, which reflects a shift in how employees prioritize trusted communication from their organization,” said Cirone.

Addressing the internal-external overlap

Cirone’s point illustrates just one example where the line between internal and external communications blurs. When this happens, communicators must adapt their approaches to address changing employee demands. Employees can be your most vocal external stakeholders and move from advocate to activist pretty quickly when they feel unheard and unsupported—even amplifying internal messages on public platforms.

You can mitigate this by:

  • Creating a messaging strategy that aligns internal and external narratives.
    • “The internal world is the external world now, and vice versa,” said Darrow. “It’s critical to create cohesive messaging that reflects the nuances of both.”
  • Training your comms team to approach each challenge with a holistic mindset.
    • “Specialization can make it harder to see risks across the broader communication spectrum,” Cirone said. “We’re upskilling our teams to think holistically and consider multiple perspectives.”

Building bridges in a divided world

Communicators today serve as bridge builders, conveners and dot connectors who engage disparate and divided audiences through empathy and narrative.

This is made easier by:

  • Using storytelling as a tool to connect and unify.
    • Darrow believes that the power of words is strong enough to bring adversaries together. “Through storytelling, we help connect people across divides, shaping conversations in ways that resonate deeply with our audiences,” she said.
  • Developing frameworks that identify and evaluate strategic engagement opportunities.
    • T-Mobile’s “Lean Team” framework helps the comms team assess whether to lean into or out of conversations based on an established set of criteria.
  • Grounding your comms strategies in data.
    • Darrow emphasized that this framework is ultimately a data-driven exercise. “We rely on data to understand the value and risk of engagement, ensuring we’re present where it matters and silent when it’s best,” she explained.
    • Cirone agreed and explained that data-driven decision-making moves the comms function from a tactical to a strategic place. “Communicators need to show their impact, not just their value, by grounding strategies in data and aligning them with organizational goals,” he said.

The future is unwritten

As communicators prepare for an uncertain future, there’s certainty in building skilled, diverse teams with the ability to navigate change.

“The impact of communications lies in its ability to drive change and act as a trusted advisor to leadership,” said Cirone. “Focus on building teams that complement your strengths and amplify your goals.”

Darrow agreed, underscoring the idea that seeking alignment across stakeholders and staying agile will keep comms in the mix.

“If your feet are planted, you’re not contributing,” she said. “You have to keep moving, shifting and evolving to stay relevant.”

Register now to access the full, free webinar here.

Darrow and Cirone are both members of Ragan’s Communications Leadership Council. Learn more about joining here. 

A closer look at Spirit Airlines’ bankruptcy comms
https://www.prdaily.com/a-closer-look-at-spirit-airlines-bankruptcy-comms/
Fri, 22 Nov 2024 11:00:45 +0000

The post A closer look at Spirit Airlines’ bankruptcy comms appeared first on PR Daily.

Spirit’s messages to travelers and investors about its Chapter 11 filing offer insights into effective change comms.

Spirit Airlines filed for Chapter 11 bankruptcy protection on Monday after losing more than $2.2 billion since the start of the pandemic, failing to restructure its debt and unsuccessfully attempting to merge with JetBlue at the beginning of the year. It expects the process to be completed by Q1 2025.

Positioning the move as a reorganization bankruptcy to provide Spirit with legal protections, the company published a press release framed as an “open letter” to travelers and a separate investor relations release.

Each announced an agreement with bondholders that the company claims will help it restructure debts and raise the funds it needs to operate during the process. Each offers solid examples for crafting bankruptcy comms, and change comms in general, delivered in a language and messaging style germane to each audience.

What the open letter got right

Spirit’s letter to travelers and customers, distributed by PR Newswire, is short but sweet.

It begins by stating the intention of the message: “We are writing to let you know about a proactive step Spirit has taken to position the company for success.” It then announces the agreement with bondholders as a means to reduce total debt, give the company more financial flexibility and “accelerate investments providing Guests with enhanced travel experiences and greater value.” The opening also frames the bankruptcy as “prearranged” to hammer home the idea that this is a strategic plan and not a last resort (it’s both).

This opening effectively couches the financial news in language that general audiences can understand, then ties the changes back to things that matter to guests — how it affects their travel experience. Whether water will become free on future Spirit flights remains to be seen.

The letter then bolds and underlines the point it wants those scanning the message to take away: “The most important thing to know is that you can continue to book and fly now and in the future.”

This is followed by assurances that travelers can still use their tickets, credits and loyalty points as normal, join the airline’s loyalty program and expect the same level of customer service from Spirit.

The letter ends with a few more best practices:

  • It shares the estimated date of Q1 2025 when the process will be complete, an accountability play.
  • It alludes to other airlines that have navigated bankruptcy and emerged stronger. (American Airlines and Delta filed after 9/11, but Spirit is the first airline to do this in a decade.) This makes Spirit seem like less of an outlier, even though its debt load and case are extreme.
  • It offers a landing page to learn more about the company’s financial restructuring. This is a tried and true tactic for any change message—stick to the key points in the message, and direct interested audiences elsewhere to learn more.

“I applaud them for trying to communicate directly with their customers, reinforcing that they can book and fly now and in the future without disruption,” said Vested Managing Director Ted Birkhahn.

“However, they need to ensure they deliver on this promise because mass flight cancellations or service disruptions during this period put them at risk of breaking any remaining trust between the brand and the consumer.”

While Spirit’s open letter captured many best practices of change comms, it leaves other questions unanswered. Birkhahn also pointed out that the statement doesn’t mention any strict adherence to safety standards during the bankruptcy proceedings—a concern on the minds of any traveler following Boeing’s recent crucible.

“When considering flying with an airline in bankruptcy, my main concerns are whether it might be distracted or understaffed, potentially compromising its ability to meet FAA standards, and whether it can maintain normal operations,” he added.

“I realize all airlines are under strict FAA oversight, but consumer perception is Spirit’s reality, and if consumers are fearful of flying the airline, they will likely book elsewhere.”

Glossing over past mistakes and pretending they never happened is bad PR; owning them and positioning a financial restructuring as a chance to rectify past operational failings can turn a setback into a cornerstone of future success.

How the IR release frames things differently

While the open letter had the boilerplate cautionary legal language in its forward-looking statement, the investor relations release goes into more specific terms using business and legal language.

Four takeaways are listed up top before the press release begins:

  • The first says that “Flights, ticket sales, reservations and all other operations continue as normal,” expanding on the commitments in the open letter to include operations.
  • The second notes that the restructuring agreement was signed “by a supermajority of Spirit’s bondholders,” explicitly noting that bondholders have agreed to the plan.
  • The third defines the Chapter 11 proceedings as “voluntary” and says they have officially commenced “to implement the agreed deleveraging and recapitalization transactions.”
  • The fourth gets into the financing details Spirit will receive from existing bondholders and specifically notes that vendors, aircraft lessors and “holders of secured aircraft indebtedness” will be “paid in the ordinary course and will not be impaired.”

These points anticipate the most likely investor concerns and address them first — always a best practice when crafting business comms. They are consistent with the ideas in the open letter but go into deeper detail, which makes sense for the audience closely invested in business operations and performance.

This release also included the first indication of how Chapter 11 will affect employee compensation, claiming it will not impact team member wages or benefits, “which are continuing to be paid and honored for those employed by Spirit.”

A statement from Spirit President and CEO Ted Christie closes the IR note, contextualizing what this news should mean for the company’s bottom line and ending by thanking his team.

What this means for employees

While Christie thanked the Spirit team and the IR release said that employee compensation and benefits would remain unaffected, the question of layoffs still looms. Spirit furloughed hundreds of pilots over the summer and into the fall after announcing pay raises for four executives in a July 8-K filing.

On the heels of the bankruptcy news, a story about Christie’s $2.5 million Florida home isn’t doing any favors for the company’s employer brand, either.

Spirit is at an inflection point—not just in how it communicates with unions, but in how it educates employees directly about what bankruptcy means for their roles and business operations in the months ahead.

We don’t know how Spirit communicated this news with employees, and a request for comment from Spirit was not returned at the time of publication.

Cat Colella-Graham, internal comms lead and coach at Coaching for Communicators, believes that foundational change comms best practices can be applied at Spirit to mitigate internal confusion or backlash.

Those include:

  • Holding an all-hands meeting and following up with an email. “It’s important to share the what, why, and why it matters to employees first and fast,” said Colella-Graham. “To avoid any misinformation, follow up with an email that recaps the facts, offers a resource for questions and a reminder to direct press inquiries to the appropriate media rep. The law firm assigned to the case may require this for compliance.”
  • An intranet FAQ. This should include:
    • The roles that are immediately impacted, if any.
    • What employees can do to prepare for next steps.
    • Any resources, support or professional services the company offers employees to help the process.
    • A commitment to communication, including who they can go to with additional questions.
    • Regular updates ahead of developments hitting the news. Finding out bad news about your organization from external sources before hearing it internally is one of the biggest change comms sins you can make — it corrodes trust and can transform employees from advocates to activists.

Colella-Graham also sees this as an opportunity for Spirit’s leaders to demonstrate humility, empathy and consideration for how difficult it is to process this news so close to the holidays.

“Many employees will be essential in this deal,” she said. “If leaders want to retain those essential team members to work the best bankruptcy deal they can including a sale, merger or other administrative remedy, they need to walk shoulder to shoulder with the team.”

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

What The Onion’s purchase of InfoWars can teach us about executive comms
https://www.prdaily.com/what-the-onions-purchase-of-infowars-can-teach-us-about-executive-comms/
Mon, 18 Nov 2024 11:24:32 +0000

The post What The Onion’s purchase of InfoWars can teach us about executive comms appeared first on PR Daily.

“America’s Finest News Source” had an unexpected executive comms joke up its sleeve.

Though the premise of this story falls in line with something satirical publication The Onion might publish, this news is real: The Onion has purchased Alex Jones’ notorious right-wing conspiracy content site InfoWars, with plans of relaunching it next year as a satire of its former self.

According to a report from the AP, The Onion won the rights to Infowars in a bankruptcy auction resulting from the $1 billion ruling against Jones for defaming family members of victims of the Sandy Hook school shooting. The purchase was reportedly done with the blessing of the Sandy Hook families, and includes plans for the nonprofit Everytown for Gun Safety to advertise on the new, joke version of Infowars.

In typical Onion fashion, the satire site confirmed the purchase in a blog post from the totally real and not-at-all fictional CEO of The Onion’s parent company Global Tetrahedron, “Bryce P. Tetraeder”, who outlined why the move was made:

Founded in 1999 on the heels of the Satanic “panic” and growing steadily ever since, InfoWars has distinguished itself as an invaluable tool for brainwashing and controlling the masses. With a shrewd mix of delusional paranoia and dubious anti-aging nutrition hacks, they strive to make life both scarier and longer for everyone, a commendable goal. They are a true unicorn, capable of simultaneously inspiring public support for billionaires and stoking outrage at an inept federal state that can assassinate JFK but can’t even put a man on the Moon.

The purchase includes the rights to the Infowars video archive, social media accounts, website, and studio in Austin, Texas.

Why it’s important

Whenever a big shift in the media landscape happens, communicators take notice. But one like this is particularly notable.

The Onion, long known for lampooning people and symbols of power in American society, simultaneously made a joke out of Jones and his vitriolic content mill by shutting it down and providing the proceeds to the Sandy Hook families that he so disgustingly disparaged. But The Onion took it even a step further with its plans to relaunch Infowars as a satire of its former self.

Here at Ragan, we write a lot about sticking to your organizational values. What that means obviously differs pretty greatly by company. But seeing The Onion do right by both the Sandy Hook families in making this purchase AND nailing a pretty great joke? That lines up on two levels. Even the blog post The Onion released from its “CEO” has the fingerprints of “America’s Finest News Source” all over it.

When big change happens, stick to your morals and your company values when you’re communicating about it. You stand a pretty good chance of getting it right that way.

The real executive comms play

While Bryce P. Tetraeder doesn’t exist, Ben Collins, the CEO of The Onion’s parent company, Global Tetrahedron, very much does. Collins made the media rounds yesterday, telling The New York Times:

“We thought this would be a hilarious joke,” Mr. Collins said. “This is going to be our answer to this no-guardrails world where there are no gatekeepers and everything’s kind of insane.”

Mr. Collins said that the families of the victims were supportive of The Onion’s bid because it would put an end to Mr. Jones’s control over the site, which has been a fount of misinformation for years. He said they were also supportive of using humor as a tool for raising awareness about gun violence in America.

“They’re all human beings with senses of humor who want fun things to happen and want good things to take place in their lives,” Mr. Collins said. “They want to be part of something good and positive too.”

With a fake CEO’s inflated satirical message complementing Collins’ real and immediate explanation of strategy, The Onion boosted its purpose and brand affinity in one satiric swoop.

Building off the values espoused ironically in the publication’s most popular and recurring fake headline, “‘No Way to Prevent This,’ Says Only Nation Where This Regularly Happens”, this is an example of an executive comms play where form follows function, function ties back to purpose, and brand identity is harnessed for good.

Sean Devlin is an editor at Ragan Communications. In his spare time he enjoys Philly sports and hosting trivia.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Top takeaways from Ragan’s Future of Communications Conference 2024
https://www.prdaily.com/top-takeaways-from-ragans-future-of-communications-conference-2024/
Mon, 18 Nov 2024 10:18:22 +0000

The post Top takeaways from Ragan’s Future of Communications Conference 2024 appeared first on PR Daily.

Wisdom from all-star speakers for the road ahead.

The future of communications will require proactive crisis communications management, personalized messages for individual employees and, above all, a willingness to bridge differences and reach across the political aisle.

These were the overriding themes of Ragan’s Future of Communications Conference, held last week in Austin, Texas. More than 700 communicators came together to share their challenges and triumphs and to prepare for the year ahead.

These were some of the takeaways you should know as we head into 2025. For more insights, join us Nov. 19 for a FREE webinar recapping what you missed.

On proactive crisis management

Taking place the week after Donald Trump won the election, the conference was rich with discussion about what proactive crisis management will look like over the next four years.

“You shouldn’t respond to every single (political issue) because it goes to an issue of authenticity,” said Elizabeth Monteleone, chief legal officer of Bumble. “But on those that we’ve committed to, regardless of what the political landscape is going to be, we’re going to continue to show up. That consistency builds trust. It builds authenticity in your employee base and your consumer base.”

Monteleone added that Bumble’s aim has been to focus on “policies, not politics.”

With unionization efforts on the rise, Beth R. Archer, director of corporate communications at Constellation, explained how the company’s strong relationship with unions across the country is supported year-round. Each policy change, development, and employee award is shared with unions well in advance.

“We create contingency plans that address every scenario, and our tone we always take with that is positive and forward-looking,” Archer explained. “We’re going to be working with these folks and want to be sure that we don’t erode that trust.”

On personalizing messages for employees

We continue to see internal communicators put their marketing hats on to segment their employee populations and deliver personalized messaging strategies that make “meet them where they are” more than a platitude.

“As the comms landscape changes and the future comes in, customizing communications seamlessly for the deskless population is going to look different,” said Andres “Dre” Muñiz, associate director of global manufacturing & quality communications at Eli Lilly and Company. “The core constant is just treating them like people.”

Taking a people-first approach should also be reflected in the leaders you select to speak to your employee population. Effectively personalizing employee messages also means building variety into your company meetings: give a platform to those doing the work who don’t often get the spotlight, and center each update on the most timely and actionable developments.

“The idea of a quarterly meeting that follows the same exact format with the same speakers should be sunsetted,” said Christina Furtado, director of AI communications at Dell Technologies. “You have to be flexible in how your executive addresses their team and who they pull in to help them do the storytelling.”

If segmenting your employee population feels daunting, consider how AI can help.

“We started taking our (engagement) data and running it through AI to ask it for trends,” explained Brandi Chionsini, senior manager of internal communications at LegalZoom. “Anytime you do a survey, it needs to be immediate and expedient. AI is helping us analyze large groups of data quickly and efficiently so we’re able to turn that around (to let employees know we’re listening) a lot faster.”

On bridging differences to reach across the aisle

Whether your workforce is red, blue or purple, Archer urged audiences to approach politically charged conversations “with respectful curiosity,” a phrase she learned from one of Constellation’s attorneys.

“Less words like diversity, and more words like belonging,” said Joanna Piacenza, vice president of thought leadership at Gravity Research. Piacenza’s point underscores how the words we use can reframe the work we’re doing to be less incendiary or politically charged, while still making room for the work to continue.

Alise Marshall, senior director of corporate affairs and impact at Pinterest, told the audience in her session that times of polarization are an opportunity to reignite and reactivate shared values.

“Regardless of that polarization that we see across the electorate, folks still want the same basic things out of this life,” she said. “They want to be able to go to work in a dignified manner and role. They want to be able to give back to their communities and to those loved ones.”

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Mastering AI: How to craft persuasive and productive prompts
https://www.prdaily.com/mastering-ai-how-to-craft-persuasive-and-productive-prompts/
Thu, 14 Nov 2024 11:30:46 +0000

The post Mastering AI: How to craft persuasive and productive prompts appeared first on PR Daily.

Levar Cooper from Lake County, Florida Government kicked off Ragan’s Future of Communications Conference with gen AI prompts you can use today.

Tools are only as helpful as how you use them, and generative AI tools are no different — the outputs of tools like ChatGPT are only as useful as the prompts you feed them.

Levar Cooper, communications director at Lake County Government in Florida, is optimistic about the future of communications and how automation will inform it.

“I’m on a mission to help as many people benefit from the power of AI as possible,” he told attendees Wednesday during his opening workshop at Ragan’s Future of Communications Conference.

After Cooper acknowledged the current limitations of AI, including cognitive biases, adoption barriers, and policy and regulation proposals that keep people from diving in, he shared several AI prompting tips to open Ragan’s flagship CommsWeek event.

Here’s what stuck out.

Selecting the right tools

Cooper recommends communicators resist the shiny allure of technology itself to consider how these tools actually meet their needs.

“It’s not enough just to use AI — you’ve got to have a strategy behind it,” Cooper said.

Considerations should include:

  • Business alignment. This means ensuring that the tool aligns with and supports your organization’s strategic goals.​
  • Data privacy and compliance. You should always confirm the tool meets data privacy and security standards to protect sensitive information from the outset.​
  • User experience and integration. Assessing each tool’s ability to integrate smoothly with current workflows and its ease of use will encourage buy-in across functions and move you along the adoption curve. “We often think of user experience as customer experience, but it’s really everyone at your organization who has to use it,” said Cooper.
  • Scalability and flexibility. Make sure to choose a tool that can scale with your organization and adapt to future needs. This may mean that it includes some features and functions you aren’t ready for yet, but can work toward implementing down the line.

Prompts to scale use and meet content needs

Cooper explained what you need to give AI to be successful. “When talking about the prompting identity, I give it an assignment and then give it context,” he said.

These are the prompts he’s applied successfully for each use case:

  1. Content planning. “Please act as my content coordinator and create a December social media calendar for Lake County Fire Rescue’s Facebook page that leverages data-supported best practices. Incorporate national holidays and area events where practical.”
  2. Content drafting. “Please act as my political consultant and draft a speech for the groundbreaking of a new Leslie B. Knope community center in Pawnee, Indiana in the voice of Mayor Gergich.” This is an example of how AI can reference broader events and culture, in this instance, the popular show “Parks & Recreation.”
  3. Event planning. “Please act as my event coordinator and create an event plan using the framework of the attached document for the grand opening of the new Braised Bison Bistro location in Denver, Colorado.” Cooper said that “uploading that framework allows AI to adapt to my framework, and not the other way around.”
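Cooper’s “assignment, then context” structure lends itself to a reusable template. A minimal sketch (the helper function and field names below are illustrative, not from his workshop):

```python
def build_prompt(role: str, assignment: str, context: str = "") -> str:
    """Compose a prompt using the 'assignment, then context' structure:
    assign the AI a role, give it a task, then add supporting context."""
    prompt = f"Please act as my {role} and {assignment}."
    if context:
        prompt += f" {context}"
    return prompt


# Rebuilding Cooper's content-planning example from the pieces:
prompt = build_prompt(
    role="content coordinator",
    assignment=(
        "create a December social media calendar for Lake County "
        "Fire Rescue's Facebook page that leverages data-supported "
        "best practices"
    ),
    context="Incorporate national holidays and area events where practical.",
)
print(prompt)
```

Keeping the role, assignment and context as separate fields makes it easy to swap any one of them out while reusing the rest.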

Working with custom prompts

Custom prompts allow you to harness the output of these tools for more strategic purposes.

“Many of these platforms allow for custom prompts, which really helps supercharge what you’re doing in a repeatable context,” Cooper said, but urged communicators to embrace the DRY mantra — that’s “don’t repeat yourself”— as a reminder to ensure your workflow is dynamic and iterative.

His tips for custom prompts include:

  • Define objectives and context. Cooper recommends clarifying the purpose of the prompt and providing relevant context such as the target audience, tone and format.
  • Be specific and test iteratively. Give your tool precise instructions and refine the prompt based on trial and error to improve results over time. The more you spell these details out, the better your tool learns them.
  • Use examples and boundaries. Including examples and specifying output constraints (those can also be tone, style or format) will help you guide the AI response to more effective outputs.
  • Break down complex tasks. For multi-phase projects, you can chain prompts in stages to build structured, aligned outputs for each part of the task. This will minimize the likelihood of your tool getting confused and allow you to train it at multiple points in the project.
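The “break down complex tasks” tip can be sketched as a simple prompt chain, where each stage’s output becomes the next stage’s context. Here `run_model` is a placeholder for whatever approved AI client your organization uses:

```python
def run_model(prompt: str) -> str:
    # Placeholder: substitute a call to your organization's approved AI tool.
    return f"[model output for: {prompt[:40]}...]"


def run_chain(stages: list[str], project_brief: str) -> list[str]:
    """Run staged prompts, carrying each stage's output forward as context."""
    outputs = []
    context = project_brief
    for stage in stages:
        prompt = f"{stage}\n\nContext from previous step:\n{context}"
        context = run_model(prompt)
        outputs.append(context)
    return outputs


results = run_chain(
    [
        "Outline the announcement plan.",
        "Draft key messages from the outline.",
        "Write a one-page FAQ from the key messages.",
    ],
    project_brief="Grand opening of a new community center.",
)
```

Because each stage is reviewed before it feeds the next, you get a natural checkpoint to correct the tool at every step rather than untangling one sprawling output at the end.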

Prompts to optimize engagement

Cooper also shared ways to get Claude to analyze data and provide insights, including:

  • Audience insights. “Please act as my strategic communications consultant and provide a sentiment analysis in the form of a report on posts related to debris collection following Hurricane Milton and include trend insights beginning on Oct. 10.”
  • Platform insights. “Please act as a business analyst and make recommendations on the optimal times for posting content based on the provided data.” Cooper said this inquiry is especially powerful because it gives you insights that demystify algorithms and tell you why things aren’t working as well.

Cooper went deeper into using AI to craft compelling visuals, train systems on executive voice, engage internal stakeholders to move them along the adoption curve and more during his full workshop, which will be available in the coming weeks to Ragan Training members. Subscribe today!

Keep your eyes peeled for more coverage from #CommsWeek2024.


]]>
https://www.prdaily.com/mastering-ai-how-to-craft-persuasive-and-productive-prompts/feed/ 0
An employee communications template for addressing post-election unease https://www.prdaily.com/employee-communications-template-for-addressing-post-election-unease/ https://www.prdaily.com/employee-communications-template-for-addressing-post-election-unease/#respond Wed, 06 Nov 2024 12:08:18 +0000 https://www.prdaily.com/?p=345075 The anatomy of a message that acknowledges uncertainty, provides support, and ties back to your core mission. As the final results of the 2024 US Presidential Election came in, a seeming win for Trump of the most contentious American election yet means that roughly half of voters are disappointed. Whether your workforce skews blue, red […]

The post An employee communications template for addressing post-election unease appeared first on PR Daily.

]]>
The anatomy of a message that acknowledges uncertainty, provides support, and ties back to your core mission.

As the final results of the 2024 US Presidential Election came in, an apparent Trump win in the most contentious American election yet means that roughly half of voters are disappointed. Whether your workforce skews blue, red or purple, all employees will share a sense of unease, anxiety and stress until the dust settles. Many will feel it for the foreseeable future, too.

While some leaders choose to stay silent during this period, those who understand how to communicate in times of ambiguity reclaim an opportunity to strengthen trust with employees while reinforcing values and redirecting focus to their organization’s big picture.

Integral’s latest research found that the younger employees are, the more they want to express their political views in the workplace. It also found that senior leaders are more comfortable having political dialogue than other levels of managers—and more concerned about political tension, too.

Those insights suggest an opportunity for communicators and leaders alike to set expectations for respectful political discourse; to acknowledge, align and assure employees amid uncertainty; and to unite the workforce around a shared mission.

“This election is a historic moment for businesses and society alike,” Golin Global President of Corporate Affairs Megan Noel told Ragan.

“Communicators considering a post-election communication should be prepared for heightened emotions and various reactions to the outcome. Avoid speculating about the potential impact of the election results, especially prior to any official decisions being made.”

Noel recommends that all post-election communications reinforce five things:

  1. The importance of civic engagement and respect for the democratic process. While that’s normally been positioned ahead of election day, keeping that message alive matters now more than ever.
  2. Commitment to your purpose and values “that guide [your company’s] behaviors and actions, such as integrity, respect, care, and inclusivity.”
  3. Support for employees, customers and communities regardless of political affiliation or stance. This should explicitly mention “the permission to not engage in political discussion, especially during the immediate days following the election.”
  4. Company benefits that support mental and physical wellbeing, “including access to resources and tools as well as inclusion networks/ERGs gatherings.”
  5. Safety and security measures in place at any office locations close to polling places, demonstration sites or campaign HQs. “This will be important should demonstrations or protests break out.”

Reinforcing these messages consistently also requires tweaking them as employee sentiment evolves. “Communicators should continuously monitor conversations and dialogues that may impact their companies and brands and use that information to correct, adjust or inform key audiences as needed,” Noel added.

Putting it all together

During Ragan’s Internal Communications Conference at Microsoft HQ in Redmond, WA last month, Microsoft Director of Employee & Executive Communications and Employer Brand Amy Morris, and Senior Manager of Communications and Reputation Management Sarah Shahrabani, showed how Microsoft’s values plug into a communication framework to help the matrixed comms function manage political discourse across internal channels.

They also emphasized the importance of having messages of unity come from leaders as another mechanism for reinforcing trust, while Morris explained how her team prepares leaders with pre-vetted talking points that emphasize Microsoft’s values and equip the leaders to address timely, topical issues as they emerge.

Similarly, Noel’s recommendations serve as smart reputational guideposts for any leader, or communicator crafting messages on a leader’s behalf, to follow.

Applying her five post-election points to an employee message looks something like this:

Got any other tips for executive messages that acknowledge, align and assure employees during moments of unease? Let us know in the comments below.

Join us next week for post-election therapy as we look to 2025 and beyond at Ragan’s Future of Communications Conference.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.


]]>
https://www.prdaily.com/employee-communications-template-for-addressing-post-election-unease/feed/ 0
6 ways communicators can influence the AI budgeting process https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/ https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/#respond Mon, 07 Oct 2024 10:00:43 +0000 https://www.prdaily.com/?p=344629 Your team will be using AI. Here’s how to take some control. This is part three in Ragan’s series on budgeting for communicators. Read part one here and  part two here. When you gain influence, you secure budget. Similarly, the rapidly-accelerating applications of AI open unrealized opportunities for communicators to influence the safe, responsible and […]

The post 6 ways communicators can influence the AI budgeting process appeared first on PR Daily.

]]>
Your team will be using AI. Here’s how to take some control.

This is part three in Ragan’s series on budgeting for communicators. Read part one here and part two here.

When you gain influence, you secure budget. Similarly, the rapidly-accelerating applications of AI open unrealized opportunities for communicators to influence the safe, responsible and practical implementation of the tech across business lines.

AI will benefit those closest to the business, and becoming closer to the business is the best way to gain influence. To this end, it stands to reason that communicators should seek to allocate spend for operational AI tools.

This requires documenting and demonstrating impact, which is hard to do with new tech when you lack benchmarks and baselines. In the absence of this, Catherine Richards, founder of Expera Consulting and AI coach to Ragan Communications Leadership Council members, suggests focusing on the strengths and differentiations that are unique to comms.

“The differentiation is a trust catalyst,” Richards said, “because communications generally leads the relationships with investors, analysts and media.”

“Relationships are your secret sauce,” she continued. “Other functions don’t have those, and so communicators can build that trust. You have to be transparent there. You guard the reputation. Many times you are the navigator for the ethics. Lean in there.”

Ragan and Ruder Finn’s recent survey of AI in internal communications underscores how important trust is to scaling AI implementation across the business —  50% of senior communications leaders said data privacy and fake news were their top concerns for working with AI, while 48% of the C-suite cited resistance from key stakeholders as a barrier.

Other top concerns included the idea that communication overload would result in misinformation (41%), the loss of personal touch and humanity in communication (37%) and the lack of internal expertise and resources (35%).

With trust as your guide, it’s easier to secure more influence over setting budget for operational AI tools with a strategy that weaves in stakeholders across the business, prioritizing transparency while connecting the value of these tools back to organizational goals.

Here’s how to start making the case:

1. Show ROI and business impact.

EY’s recent study on AI investments found that senior leaders at organizations investing in AI are seeing tangible results across the business, with the most positive ROI reported around operational efficiencies (77%), employee productivity (74%) and customer satisfaction (72%).

These are all metrics that communicators can, and should, track.

Quantifying the benefits of AI for communications starts with establishing a baseline (and not comparing yourself to industry benchmarks just yet). Start by documenting how GenAI content tools improve productivity and tracking time saved on mundane manual tasks.

When testing AI tools to better target specific employee or stakeholder segments and customize your outreach, measure the open and click-through rates of those AI-assisted messages against similar messages sent before you started using the tool.
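A minimal version of that before-and-after comparison is a simple uplift calculation; the open rates below are invented for illustration:

```python
from statistics import mean

# Hypothetical open rates (%) for comparable messages before and after
# adopting an AI-assisted targeting tool.
before = [22.1, 24.5, 21.8, 23.0]
after = [27.4, 29.1, 26.8, 28.2]

# Uplift in percentage points, plus the relative change against the baseline.
uplift = mean(after) - mean(before)
pct_change = uplift / mean(before) * 100
print(f"Average open rate rose {uplift:.1f} points ({pct_change:.0f}%).")
```

The same before/after structure works for click-through rates or any other engagement metric you tracked prior to adopting the tool.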

With these numbers in hand, create a simple correlation model to show how your tools have positively impacted KPIs including engagement rates, employee sentiment and resonance of executive messaging while saving the comms team time.

A correlation model between time saved and the cost of that time, which breaks down everyone’s salary to hourly rates and compares that to hours gained, may be worthwhile too.
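That time-to-cost model can be sketched in a few lines; the salaries and hours-saved figures below are hypothetical:

```python
# Translate hours saved by AI tooling into dollars by converting
# annual salaries to hourly rates.
team = {
    # role: (annual salary in dollars, hours saved per month)
    "writer": (70_000, 12),
    "designer": (85_000, 8),
    "manager": (110_000, 5),
}

WORK_HOURS_PER_YEAR = 2080  # 40 hours x 52 weeks

monthly_value = sum(
    salary / WORK_HOURS_PER_YEAR * hours_saved
    for salary, hours_saved in team.values()
)
print(f"Estimated value of time saved: ${monthly_value:,.0f}/month")
```

Even a rough estimate like this gives the comms team a dollar figure to put beside the tool’s subscription cost.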

Being open with your team about what you’re tracking, and positioning this correlation as an accountability measure to grow and scale, will center the exercise around trust.

2. Align your metrics and models to business objectives.

Aligning your content and editing efficiencies with broader organizational goals like revenue growth makes it easier to connect your efforts to the business when you can explain how that time saved is being applied to future projects.

While partnering with IT, Finance and Marketing can help you allocate comms budget for cross-departmental projects and collaborations, it can also help you pull metrics around customer satisfaction and brand positioning that may not already live on your internal measurement dashboard.

Turning other departments into joint advocates for AI investments requires explaining how the tools you want complement technologies other teams are using and improve workflow automation across the business. In turn, this grounds the relationship in trust and creates mutual accountability.

3. Present your strategy.

A clear and detailed implementation plan that includes guidelines spells out every team’s rules of play and creates visibility of ownership along the way.

Your plan should include specific use cases, timelines and measurable outcomes that each functional owner is responsible for tracking.

Your plan can also demystify the AI budget process by doubling as a roadmap to show how incremental investment can lead to long-term results.

Moving away from free tools to secure AI tools is a solid first step that you can frame around risk mitigation. Training on the investment secured is a logical second step.

Why is training crucial here? Our survey with Ruder Finn found that around half (53%) of C-suiters aged 43 and under said they were satisfied with the AI training they received, compared with 42% of C-suiters aged 44 and over. But a much wider training gap exists between the C-suiters surveyed and other communicators.

This gap emphasizes the need for more personalized training resources, which can build trust and scale implementation at the same time.

PwC leads the way here with its innovative training exercises, including a feedback loop between comms and product teams to ensure the process is iterative and collaborative.

4. Educate leaders and decision-makers.

When comms takes an early adopter mentality to research and responsibly experiment with emerging AI tech, it’s easier to educate internal stakeholders on how AI works and its tangible benefits to the business.

Be prepared to answer questions about costs, security and integration by sharing case studies from other communications leads who have found success at scale. Visit Orlando’s Adeta Gayah scaled her social media team’s operations with automated image tagging and GenAI research ideation, while PwC’s Gabrielle K. Too-A-Foo uses the firm’s tools to process large data sets and standardize SEO procedures.

Taking an educational approach by pointing to case studies not only reinforces trust — it also positions you as a leader in the process.

“That leadership voice can come from anywhere in the organization, and it’s somebody who has courage, who’s willing to be vulnerable and say, ‘I’m gonna test this out. I’m gonna take a risk,’” Richards said.

5. Launch pilot programs to document quick wins.

After aligning with leadership expectations, propose pilot programs that can test these tools with a small cohort and designated owners before expanding them out to wider teams.

These pilots should be focused on producing quick, measurable results. During her time at VMware, Richards collaborated with engineers to document marketing use cases while working with GenAI tools Jasper and Writer.

Hotwire Global gamified the pilot process by challenging more than 400 employees to create custom GPTs and empowering all functions to pilot their own use cases in the process.

“We received many awesome, just mind-blowing examples that we never even thought of, from hilarious to very useful,” Anol Bhattacharya, managing director, marketing service: APAC for Hotwire, told Ragan.

“[This includes] some awesome internal process development tools, some of them client-facing, which we are developing further now. It’s not only the AI — any comms and marketing agency’s innovation should look like this: give them the tools, teach them basics and get out of the way, rather than trying to mold it too much.”

6. Share industry trends and competitor insights.

It’s often said that comparison is the thief of joy, but the organizations that implement and scale AI responsibly will gain an advantage over their industry competitors. To that end, it’s crucial to emphasize where and how AI is being implemented in communication strategies across your industry.

A recent CNBC Technology Executive Council bi-annual survey found that, among companies spending on AI, “roughly four times as many are investing in employee-facing AI projects rather than customer apps.”

Meanwhile, Ragan and Ruder Finn’s survey shows what industries are using AI the most, with the aerospace, aviation and transportation industry reporting the highest daily use.


Just under half of respondents surveyed in the manufacturing and technology industry (49%) are using AI daily—less than those in the education, government and nonprofit spaces. Unsurprisingly, heavily regulated industries like healthcare and finance have the smallest adoption rates.

Staying current on our research on AI in communications and other fields helps you point toward broader trends in the communication space and position comms as forward-thinking innovators.

In turn, this positions your proposed investments as less of a luxury and more of a necessity.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Additional resources on securing comms budgets, including our recently released budget report, are available exclusively to members of Ragan’s Communications Leadership Council. Learn more about joining here.


]]>
https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/feed/ 0
Advocating for comms budget in ‘the land of unfunded mandates’ https://www.prdaily.com/advocating-for-comms-budget-in-the-land-of-unfunded-mandates/ https://www.prdaily.com/advocating-for-comms-budget-in-the-land-of-unfunded-mandates/#respond Thu, 26 Sep 2024 10:00:15 +0000 https://www.prdaily.com/?p=344493 With budget season upon us, communications leaders must get creative in making their case. With budget season upon us, communications leaders must get creative in making the case for securing the resources they need. Because comms is often the coordinator and convener of cross-departmental campaigns and initiatives, it often means securing the resources required to […]

The post Advocating for comms budget in ‘the land of unfunded mandates’ appeared first on PR Daily.

]]>
With budget season upon us, communications leaders must get creative in making their case.

With budget season upon us, communications leaders must get creative in making the case for securing the resources they need.

Because comms is often the coordinator and convener of cross-departmental campaigns and initiatives, securing the resources required to do quality work is often contingent on others who don’t sit at the same crossroads.

“In my experience, comms can be the land of ‘unfunded mandates’,” said Shannon Iwaniuk, a senior communications leader at a global life sciences company. “We are often pulled into supporting events, activities and leaders/corporate initiatives that haven’t been expressly on the plan or effectively accounted for in the budget. This is especially true for teams that are forming or growing.”

“Communications is often relegated to scrounging leftover cookie crumbs from the office budget party,” agrees communications leader and Ragan Advisory Board member Amanda Ponzar, adding that most comms leaders are understaffed, stuck with small budgets or see their budgets first to be cut because leadership doesn’t understand the value.

Iwaniuk sees this as a charge for communicators to think critically about what is needed to communicate effectively and consistently.

“As comms pros, we’re masters of pulling the rabbit out of the hat and making the impossible happen through sheer grit and will,” she continued. “That’s fine for a first time or an emergent situation, but the discipline comes when we do an effective and honest after-action report, when we plan for how to replicate and optimize for the future.”

Connecting needs to goals through correlation modeling

Addressing this starts with understanding your organization’s goals, then demonstrating how you’re helping meet those goals in measurable, tangible ways.

This process is the same whether you’re working in internal or external communications, PR or marketing. But what those ways are will depend on your business and industry.

“For nonprofits and smaller organizations, it’s almost always related to revenue or fundraising,” Ponzar said, “though many leaders love being featured in media articles. Seven or eight years ago, after our CEO was interviewed on the front page of USA Today, he asked me how much money it raised…and the answer was nothing…so I had to rethink everything.”

She began measuring ROI by looking at marketing/fundraising correlation modeling.

“I could show fundraising increased around the days we ran certain email marketing or social media campaigns, thus justifying the small investment in marketing/comms,” explained Ponzar. “Last year, the funds raised in one large campaign increased significantly enough that the increase alone covered the marketing investment three times over—and the total raised was much higher than that.”

Asking for what you need

Ponzar used to receive an allocated budget, but for the past few years she’s instead been building detailed budgets in an Excel spreadsheet a year out to make the case for investment.

This exercise underscores how “much of comms is related to keeping the communication lights on,” she said, listing a series of peripheral investments like website hosting sites and widgets, graphic design software, social media scheduling platforms, monitoring tools and more.

“This isn’t usually advancing communication initiatives. It’s just the building blocks.”

With today’s algorithms and competitive landscape, Ponzar found that paid media budget was also a necessary part of most initiatives. This required her to ask for a paid social budget or a satellite media tour.

“For each project, I’d build a comms plan or menu of options showing how we could use our owned channels—the ‘free’ social media, website, newsletter/email, etc—but also including all the paid options and recommending funding for those pieces.”

Iwaniuk adds that effective measurement is not just about the number of stories or posts, but also about demonstrating behavioral change.

“Have the courage to ask for what you need to deliver in a way that adds value to your organization and its leaders, supports the business and builds engagement and culture,” said Iwaniuk.

Asking for budget through this lens empowers you to draw a correlation between employee experience and culture, engagement and retention— an effective way to connect brand reputation back to tangible outcomes like productivity and retention.

Collaborating with other functions

It’s no secret that communicators are often uncomfortable with talking about numbers and budgets. This is when it might feel easier to outsource month-to-month budget needs to an administrative partner. But partnering with them instead will educate you on how they think and strengthen the likelihood of earning their support.

Ponzar advises communicators to establish a close rapport with their finance partners by knowing their numbers in advance, even if it means seeking out business fluency resources online.

“I started by panhandling, asking my peers and other department heads who valued the work my team was leading to help fund us,” she said. One-on-one calls were most effective.

“When they saw how small our budget was and admitted they needed our help to achieve their goals as we’re interdependent, they almost had no choice but to fund our team’s work if they wanted publicity, promotions, advertising, etc.,” Ponzar continued.

Iwaniuk emphasizes that building alliances to get the funding needed begins with having a conversation.

“This means making the case at the executive level for a communications line item to be included in every new proposal, contract, memorandum of understanding (MOU) and project,” she said. “Otherwise, Comms is saddled with unrealistic asks, without the resources required to promote the project or achieve organizational, let alone department, goals.”

Iwaniuk draws a parallel between asking for budget and HR or Finance departments adding a percentage to an employee’s salary to cover fringe benefits. Positioning your asks not as nice-to-haves or extra, but as providing a more accurate view of the total cost, normalizes the idea that communications must be included in your organization’s goals.

“Start with your allies who value and understand the work you do,” she said. “Over time, as you measure and continue to report your results, those allies — and your funding — should grow.”

Additional resources on securing comms budgets, including our forthcoming budget report, are available exclusively to members of Ragan’s Communications Leadership Council. Learn more about joining here.



]]>
https://www.prdaily.com/advocating-for-comms-budget-in-the-land-of-unfunded-mandates/feed/ 0
Closing the AI gap in internal communications https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/ https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/#respond Tue, 24 Sep 2024 14:00:41 +0000 https://www.prdaily.com/?p=344467 Ruder Finn and Ragan’s study, “The Great AI Divide in Internal Communications” identifies an AI implementation gap between priorities and adoption rates.  This past spring, Ragan partnered with Ruder Finn’s internal communications arm, rf.engage to learn how communicators implement AI, and how they plan to use the tech to advance their internal communications work in […]

The post Closing the AI gap in internal communications appeared first on PR Daily.

]]>
Ruder Finn and Ragan’s study, “The Great AI Divide in Internal Communications,” identifies an AI implementation gap between priorities and adoption rates.

This past spring, Ragan partnered with Ruder Finn’s internal communications arm, rf.engage, to learn how communicators implement AI, and how they plan to use the tech to advance their internal communications work in the future.

Ruder Finn and Ragan’s “The Great AI Divide in Internal Communications” report surveyed communicators in North America and the U.K. across all levels of seniority and a vast range of industries. The results identify clear gaps between how AI is perceived and its application to internal comms efforts.

“Change of this magnitude is not straightforward, so it’s no surprise that gaps are appearing as organizations come to grips with how these technologies can deliver transformational benefits,” said Ruder Finn CEO Kathy Bloomgarden. “The key to success is to remember that any business solution must bring people along, underpinned by communications, and be linked directly to thoughtful integration within existing ways of working.”

Hearing the AI triumphs and challenges in the Ragan community has taught us that the opportunities to become AI champions are vast—opening new pathways for communicators to serve as strategic advisors who mitigate risk by crafting governance policies and setting guidelines, and who champion the communications potential of this tech to spread the influence of comms across the business.

A closer look at the largest gaps reveals where, and how, comms can secure that influence.

Understanding the gap between priorities and usage

Ragan and Ruder Finn’s research found that communicators recognize AI’s potential but lag in implementing it. On average, the report found a 16% difference between top internal comms priorities and the extent to which AI is used for those priorities.

While 57% surveyed consider executive messaging and positioning a top priority, just 34% are using AI tools to streamline their exec comms. Communicators can train a secure generative AI tool like ChatGPT to write in the style of their executives by providing examples of past messages, describing attributes of the executive’s voice such as tone, formality and sentence structure, and even spelling out words, phrases and language to avoid. Vetting any drafts with relevant stakeholders, including the executive team and counsel, will inspire further confidence to scale this process and close the gap.

As exec comms is an underutilized AI use case, overcommunicating the drafting and editing processes also builds trust between comms and senior leaders to remind them of the human element and EQ required for the messages to resonate.

Similarly, 56% of communicators consider employee engagement a top priority, but only 39% use AI to assist with employee engagement. While that implementation rate is slightly higher than with executive communications, a 17% gap still exists.

A genAI tool can also help draft internal newsletters, memos and announcements consistent in an agreed-upon brand voice. This can even be harnessed to personalize onboarding materials for new employees that tailor key information about policies and values to each hire’s specific role.

AI-powered sentiment analysis tools, meanwhile, can interpret open-ended pulse survey answers or social sentiment from intranet posts, analyzing the language to craft a summary of how employees feel about a new change or initiative.
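Production sentiment tools rely on trained language models, but the basic shape of the analysis can be illustrated with a crude keyword tally; the word lists and sample comments below are invented:

```python
# Toy sentiment tally: real tools use trained models, not keyword lists.
POSITIVE = {"great", "love", "excited", "helpful"}
NEGATIVE = {"confusing", "worried", "frustrated", "slow"}


def sentiment_summary(comments: list[str]) -> dict[str, int]:
    """Tally crude positive/negative/neutral counts over free-text comments."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for comment in comments:
        words = set(comment.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        if pos > neg:
            counts["positive"] += 1
        elif neg > pos:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts


summary = sentiment_summary([
    "Love the new intranet, very helpful",
    "Rollout felt confusing and slow",
    "No strong opinion yet",
])
```

An AI-powered tool does the same job at scale with far more nuance, but the output is the same kind of artifact: a summary of how employees feel about a change or initiative.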

Whether comms sits under the HR function or not, championing these use cases with an early-adopter mindset positions communications in an advisory capacity that strengthens trust to close the gap.

Of course, this is contingent on effective training around each new AI tool and use case. A closer look at adoption level by seniority can inform your approach to scaling training.

Seniority adoption gaps underscore training opportunities

The research found that C-suite communicators are twice as likely to use AI as less senior internal communicators — 83% of C-suiters surveyed said they use AI daily, compared with 41% of senior and mid-level respondents.

C-suiters were also 25% more optimistic about AI in internal comms than their less senior counterparts.

This cohort is not a monolith. Segmenting the C-suite by age found that 100% of C-suiters 43 and under use AI daily compared to just 58% of C-suiters age 44 and over.

These discrepancies in daily use and optimism can be solved with training. How’s that going so far?

While around half (53%) of C-suiters aged 43 and under said they were satisfied with the AI training they received compared with 42% of C-suiters aged 44 and over, a much wider training gap exists between the C-suiters surveyed and other communicators.

Just under a quarter (24%) of C-suiters said they were satisfied with their organization’s training, while the number of satisfied senior and mid-level comms pros was just 8%. Concerning as those numbers are, the dissatisfaction shouldn’t be mistaken for disengagement — 64% of communicators across all seniority levels said they want to learn more about AI’s applications for internal communications.

Putting this all together, we’re looking at a C-suite sample that’s more comfortable using AI for internal communications tasks, and happier with the training being offered, than others who sit in the function.

Considered alongside the paltry level of training satisfaction across the board, this makes sense — those who don’t consider their level of training to be sufficient are less willing to dive in.

While demonstrating comfort with ambiguity is a valuable leadership competency, the risk management remit of internal comms pros, coupled with the myriad reports on what happens when AI implementation scales irresponsibly, may explain the gap between a desire for training and satisfaction with the training received.

This raises the question of how specific and detailed the AI training communicators currently receive really is. Are you surveying your team to address the root causes and concerns driving apprehension? While most training includes a focus on human-centered prompt creation using generative AI tools to draft executive messages and employee engagement content, your training can also go much further to explore things like:

  • Launching and sustaining an effective cross-departmental AI task force.
  • Shaping AI governance and crafting internal guidelines for cross-functional use cases that prioritize transparency and security amid the latest regulatory developments.
  • Streamlining employee experience comms around recruitment, engagement analytics and the intranet.
  • Building a blueprint for successful AI implementation that aligns communicators on each step, from committees to execution.

These insights are a reminder that comfort levels and competence are not one and the same. The most effective upskilling programs are personalized to each employee’s role and preferred style of learning.

Learning those preferences from the outset, and then training your communications function in kind, will ensure that communicators are empowered and equipped to bring the rest of your workforce along the adoption curve.

For more on the internal comms AI gaps among various industries and company sizes, check out the full report here.

Ruder Finn will unpack the results during Ragan’s Internal Communications Conference, Oct. 16-18 at Microsoft HQ in Seattle, WA. Register now!

The post Closing the AI gap in internal communications appeared first on PR Daily.

]]>
https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/feed/ 0
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-11/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-11/#respond Wed, 18 Sep 2024 09:00:52 +0000 https://www.prdaily.com/?p=344422 A new OpenAI model was unveiled and California passes new AI regulations. AI tools and regulations continue to advance at a startling rate. Let’s catch you up quick. Tools and business cases AI-generated video continues to be a shiny bauble on the horizon. Adobe has announced a limited release of Adobe Firefly Video Model later […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
A new OpenAI model was unveiled and California passed new AI regulations.

AI tools and regulations continue to advance at a startling rate. Let’s catch you up quick.

Tools and business cases

AI-generated video continues to be a shiny bauble on the horizon. Adobe has announced a limited release of Adobe Firefly Video Model later this year. The tool will reportedly offer both text and image prompts and allow users to specify the camera angle, motion and other aspects to get the perfect shot. It also comes with the assurance that it is only trained on Adobe-approved images, and thus will come without the copyright complications some other tools pose.

The downside? Videos are limited to just 5 seconds. Another tool, dubbed Generative Extend, will allow the extension of existing clips through the use of AI. That will be available only through Premiere Pro.

Depending on Firefly Video’s release date, this could be one of the first publicly available, reputable video AI tools. While OpenAI announced its own Sora model months ago, it remains in testing with no release date announced. 


And just as AI video is set to gain traction, Instagram and Facebook are set to make their labeling of AI-edited content less obvious to the casual scroller. Rather than appearing directly below the user’s name, the tag will now be tucked away in a menu. However, this only applies to AI-edited content, not AI-generated content. Still, it’s a slippery slope, and it can be difficult to tell where one ends and the other begins.

Meta has also publicly admitted to training its LLM on all publicly available Facebook and Instagram posts made by adults, dating all the way back to 2007. Yes, that means your cringey college musings after that one philosophy class were used to feed an AI model. While there are opt-outs available in some areas, such as the EU and Brazil, Facebook has by and large already devoured your content to feed the voracious appetite of AI models. 

OpenAI, creator of ChatGPT, has released a new model, OpenAI o1, that focuses on math and coding prompts. OpenAI says the models spend “more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”

While this high-end, scientifically focused tool may not be a fit for most communicators, other departments may use these tools – which means communicators will be in charge of explaining the how and why of the tech internally and externally.

In a quirkier use of AI, Google is testing a tool that allows you to create podcasts based on your notes. An outgrowth of the notetaking app NotebookLM, it creates two AI-generated “hosts” who can discuss your research and draw connections. According to The Verge, they’re fairly lifelike, with casual speech and enough smarts to discuss the topic in a way that’s interesting. This could be a great tool for creating internal podcasts for those with small budgets and no recording equipment.

On a higher level, the Harvard Business Review examined the use of AI to help formulate business strategy. It found that the tool, while often lacking specifics on a business, is useful for identifying blind spots that human workers may miss. For instance, the AI was prompted to help a small agricultural research firm identify what factors may impact its business in the future:

However, with clever prompting, gen AI tools can provide the team with food for thought. We framed the prompt as “What will impact the future demand for our services?” The tool highlighted seven factors, from “sustainability and climate change” to “changing consumer preferences” and “global population growth.” These drivers help Keith’s team think more broadly about demand.

In all cases, the AI required careful oversight from humans and sometimes produced laughable results. Still, it can help ensure a broad view of challenges rather than the sometimes myopic viewpoints of those who are entrenched in a particular field. 

OpenAI o1 will be a subscription tool, like many other high-end models today. But New York Magazine reports that despite the plethora of whizz-bang new tools on the market, tech companies are still trying to determine how to earn back the billions they’re investing beyond a standard subscription model that’s currently “a race to the bottom.”

ChatGPT has a free version, as do Meta and Google’s AI models. While upsell versions are available, it’s hard to ask people to pay for something they’ve become accustomed to using for free – just ask the journalism industry. But AI investment is eye-wateringly expensive. Eventually, money will have to be made.

Nandan Nilekani, co-founder of Infosys, believes that these models will become “commoditized” and the value will shift from the model itself to the tech stack behind it.

This will be especially true for B2B AI, Nilekani said.

“Consumer AI you can get up a chatbot and start working,” he told CNBC. “Enterprise AI requires firms to reinvent themselves internally. So it’s a longer haul, but definitely it’s a huge thing happening right now.” 

Regulation and risk 

The onslaught of new LLMs, tools and business use cases makes mitigating risk a priority for communicators in both the government and private sector.

When omnipresent recording artist Taylor Swift made headlines last week by endorsing Vice President Kamala Harris for president, she explained that the Trump campaign’s use of her likeness in AI deepfakes informed her endorsement.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote on Instagram. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” 

This isn’t the first time that Swift has been subjected to the damage AI deepfakes can do – earlier this year, fake pornographic images of Swift were widely circulated on X.

Last week, the Biden-Harris administration announced a series of voluntary commitments from AI model developers to combat the creation of non-consensual intimate images of adults and sexually explicit material of children. 

According to the White House, these steps include:

    • Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse. 
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI commit to incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse.  
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI, when appropriate and depending on the purpose of the model, commit to removing nude images from AI training datasets.

While these actions sound great on paper, the lack of specifics and use of phrases like “responsibly sourcing” and “when appropriate” raise the question of who will ultimately make these determinations, and how a voluntary process can hold these companies accountable to change.

Swift’s words, meanwhile, underscore how much the rapid, unchecked acceleration of AI use cases exists as an existential issue for voters in affected industries. California Gov. Gavin Newsom understands this, which is why he signed two California bills aimed at giving performers and other artists more protection over how their digital likeness is used, even after their death.

According to Deadline:

A.B. 1836 expands the scope of the state’s postmortem right of publicity, including the use of digital replicas, meaning that an estate’s permission would be needed to use such technology to recreate the voice and likeness of a deceased person. There are exceptions for news, public affairs and sports broadcasts, as well as for other uses like satire, comment, criticism and parody, and for certain documentary, biographical or historical projects.

The other bill, A.B. 2602, bolsters protections for artists in contract agreements over the use of their digital likenesses. 

Newsom hasn’t yet moved on S.B. 1047, though, which includes rules that require AI companies to share their plans to protect infrastructure against manipulation. He has until Sept. 30 to sign, veto or allow these proposals to become law without his signature. The union SAG-AFTRA, the National Organization for Women and Fund Her all sent letters supporting the bill.

This whole dance is ultimately an audience-first exercise that will underscore just who Newsom’s audience is – is it his constituents, the big tech companies pumping billions into the state’s infrastructure, or a mix of both? The power of state governments to set a precedent that the federal government can model national regulation around cannot be overstated.

However Newsom responds, the pressure from California arrives at a time when Washington is proposing similar regulations. Last Monday, the U.S. Commerce Department said it was considering implementing detailed reporting requirements for advanced AI developers and cloud-computing providers to ensure their tech is safe and resilient against cyberattacks.

Reuters reports:

The proposal from the department’s Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of “frontier” AI models and computing clusters.

It would also require reporting on cybersecurity measures as well as outcomes from so-called red-teaming efforts like testing for dangerous capabilities including the ability to assist in cyberattacks or lowering barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons.

That may explain why several tech executives met with the White House last week to discuss how AI data centers impact the country’s energy and infrastructure. The who’s-who list included Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and Google President Ruth Porat along with leaders from Microsoft and several American utility companies.

Last month, Altman joined the Washington lobbying group Business Software Alliance, reported Semafor. The global group pushes a focus on “responsible AI” for enterprise business, a buzzword evangelized in owned media white papers across the world. 

Microsoft provides the most recent example of this, explaining its partnership with G42, an AI-focused holding group based in Abu Dhabi, as an example of how responsible AI can be implemented in the region.

Last week, Altman left OpenAI’s safety board, which was created this past May to oversee critical safety decisions around its products and operations. The move is part of the board’s larger commitment to independence, transparency and external collaboration. The board will be chaired by Carnegie Mellon professor Zico Kolter and include Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone and ex-Sony EVP Nicole Seligman.

Understood through the lens of a push for independence, Altman’s leaving the board soon after joining a lobbying group accentuates the push and pull between effective internal accountability and federal oversight. Voluntary actions like signing voluntary commitments or publishing white papers are one way companies can show “responsible AI use” while still avoiding more stringent regulation.

Meanwhile, several pioneering AI scientists called for a coordinated global partnership to address risk, telling The New York Times that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.” This response would empower watchdogs at the local and national levels to work in lockstep with one another.

We’re already seeing what a regulatory response looks like amid reports that Ireland’s Data Protection Commission is investigating Google’s Pathways Language Model 2 to determine if its policies pose a larger threat to individuals represented in the datasets. 

While a coordinated effort between the EU and the US may seem far-fetched for now, this idea is a reminder you have the power to influence regulation and policy at your organization and weigh in on the risks and rewards of strategic AI investments, before anything is decided at the federal level.

That doesn’t always mean influencing policies and guidelines, either. If a leader is going around like Oracle co-founder Larry Ellison and touting their vision for expansive AI as a surveillance tool, you can point to the inevitable blowback as a reason to vet their thought leadership takes first.

Positioning yourself as a guardian of reputation starts with mitigating risk. That’s when starting conversations around statements like Ellison’s surveillance state take or Altman’s resignation from OpenAI’s safety board forms a foundation for knowledge sharing that shapes sound best practices and empowers your company to move along the AI maturity curve responsibly. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-11/feed/ 0
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-10/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-10/#respond Wed, 04 Sep 2024 09:30:35 +0000 https://www.prdaily.com/?p=344251 A beloved social media tool skyrockets in price due to AI; California passes groundbreaking regulation bill. The recent Labor Day holiday has many of us thinking about how AI will impact the future of work. There are arguments to be made about whether the rise of the tech will help or hurt jobs – it’s […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
A beloved social media tool skyrockets in price due to AI; California passes groundbreaking regulation bill.


The recent Labor Day holiday has many of us thinking about how AI will impact the future of work. There are arguments to be made about whether the rise of the tech will help or hurt jobs – it’s a sought-after skill for new hires, but one company is using AI as a pretext for cutting thousands of roles. And in the short-term, the rapid expansion of technology is making at least some tools used by workers more expensive.

Here’s what communicators need to know about AI this week.

Tools

Many tech companies continue to go all-in on AI – and are charging for the shiny new features.

Canva, a beloved tool of social media managers, has ratcheted prices up by as much as 300% in some cases, The Verge reported. Some Canva Teams subscribers report prices leaping from $120 per year for a five-person team to $500. Some of those lower prices were legacy, grandfathered rates, but nonetheless, it’s an eye-watering increase that Canva attributes in part to new AI-driven design tools. But will users find that worth such a massive price increase?

Canva’s price hikes could be a response to the need for companies to recoup some of their huge investments in AI. As CNN put it after Nvidia’s strong earnings report nonetheless earned shrugs: “As the thrill of the initial AI buzz starts to fade, Wall Street is (finally) getting a little more clear-eyed about the actual value of the technology and, more importantly, how it’s going to actually generate revenue for the companies promoting it.” 

While Canva seems to be answering that question through consumer-borne price hikes, OpenAI is trying to keep investment from companies flowing in. It’s a major pivot for a company founded as a nonprofit that now requires an estimated $7 billion per year to operate, compared to just $2 billion in revenue. Some worry that the pursuit of profits and investment is coming at the expense of user and data safety. 

Meanwhile, Google is launching or relaunching a number of new tools designed to establish its role as a major player in the AI space. Users can once again ask the Gemini model to create images of people – an ability that had been shut down for months after the image generator returned bizarre, ahistorical results and appeared to have difficulties creating images of white people when asked. While it’s great to have another tool available, Google’s AI woes have been mounting as multiple models have proven to be not ready for primetime upon launch. Will new troubles crop up? 

Google is also expanding the availability of its Gmail chatbot, which can help surface items in your inbox, from web only to its Android app – though the tool is only available to premium subscribers.

While using AI to search your inbox is a fairly understandable application, some new frontiers of AI are raising eyebrows. “Emotion AI” is when bots learn to read human emotion, according to TechCrunch. This goes beyond the sentiment analysis that’s been a popular tool on social media and media monitoring for years, reading not just text but also human expressions, tone of voice and more. 

While this has broad applications for customer service, media monitoring and more, it also raises deep questions about privacy and how well anyone, including robots, can actually read human emotion. 

Another double-edged sword of AI use is evidenced by the use of AI news anchors in Venezuela, Reuters reports.

As the nation launches a crackdown on journalists after a highly disputed election, a Colombian nonprofit uses AI avatars to share the news without endangering real people. The project’s leader says it’s to “circumvent the persecution and increasing repression” against journalists. And while that usage is certainly noble, it isn’t hard to imagine a repressive regime doing the exact opposite, using AI puppets to spread misinformation without revealing their identity or the source of their journalism to the world.


Risks 

Many journalism organizations aren’t keen for their work to be used by AI models – at least not without proper pay. Several leading news sites have allowed their websites to be crawled for years, usually to help with search engine rankings.

Now those same robots are being used to feed LLMs, and news sources, especially paywalled sites, are locking the door by restricting where on their sites these bots can crawl.

Apple specifically created an opt-out method that allows sites to continue to be crawled for existing purposes – think search – without allowing the content to be used in AI training. And major news sites are opting out in droves, holding out for specific agreements that will allow them to be paid for their work.

This creates a larger issue. AI models are insatiable, demanding a constant influx of content to continue to learn, grow and meet user needs. But as legitimate sources of human-created content are shut off and AI-created content spreads, AI models are increasingly trained on more AI content, creating an odd content ouroboros. If it trains too much on AI content that features hallucinations, we can see a model that becomes detached from reality and experiences “model collapse.”

That’s bad. But it seems in some ways inevitable as more and more AI content takes over the internet and legitimate publishers (understandably) want to be paid for their work.

But even outside of model collapse, users must be vigilant about trusting today’s models. A recent case of weird AI behavior went viral this week when it was found that ChatGPT was unable to count how many times the letter “R” appears in “strawberry.” It’s three, for the record, yet ChatGPT insisted there were only two. Anecdotally, this reporter has had problems getting ChatGPT to accurately count words, even when confronted with a precise word count. 

It’s a reminder that while technology can seem intelligent and confident, it’s often confidently wrong. 
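The “strawberry” failure is a useful reminder that character-level questions are better answered with plain string operations than with a language model, since LLMs work on tokens rather than letters:

```python
# LLMs tokenize text and can miscount individual characters;
# ordinary string methods are deterministic and always correct
# for this kind of check.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3
```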

Kevin Roose, tech columnist for the New York Times, also discovered this week just how difficult it is to change AI’s mind about something. In this case, the subject was himself: Roose rocketed to fame last year when Microsoft’s AI bot fell in love with him and tried to convince him to leave his wife. 

As a result, many AI models don’t seem too keen on Roose, with one even declaring, “I hate Kevin Roose.”

But changing that viewpoint was difficult. Roose’s options were getting websites to publish friendly stories showing that he wasn’t antagonistic toward AI (in other words, public relations) or creating his own website with friendly transcripts between him and chatbots, which AI models would eventually crawl and learn. A quicker and dirtier approach involved leaving “secret messages” for AI in white text on his website, as well as specific sequences designed to return more positive responses.

On the one hand, manipulating AI bots is likely to become the domain of PR professionals in the near future, which could be a boon for the profession. On the other hand, this shows just how easily manipulated AI bots can be – for good and for evil.

And even when used with positive intent, AI can still return problematic results. A study featured in Nature found that AI models exhibited strong dialect prejudice that penalizes people for their use of African American Vernacular English, a dialect frequently used by Black people in the United States. “Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death,” the study finds.

This is what happens when technology is trained on so much human writing: it’s going to pick up the flaws and prejudices of humans as well. Without strong oversight, it’s likely to cause major problems for marginalized people. 

Finally, there is debate over what role AI is having in the U.S. presidential election. Former president Donald Trump himself appeared to be taken in by a deepfake in which Taylor Swift endorsed him (no such thing ever happened), sharing it on his Truth Social platform. AI is being used by both camps’ supporters, sometimes to generate obviously fake imagery, such as Trump as a bodybuilder, while other uses are more subtle.

But despite its undeniable presence in the election, it isn’t clear that AI is actually reshaping much in the race. State actors, such as Russia, are using the tools to try to manipulate the public, yes, but a report from Meta indicated that the gains were incremental and this year’s election isn’t significantly different from any other with regard to disinformation.

But that’s only true for now. Vigilance is always required. 

Regulation

While some continue to question the influence of deepfakes on our democratic process, California took major steps last week to protect workers from being exploited by deepfakes.

California Assembly Bill 2602 was passed in the California Senate and Assembly last week to regulate the use of Gen AI for performers, including those on-screen and those who lend their voices or bodily likeness to audiobooks and videogames. 

While the bipartisan support the bill enjoyed is rare, rarer still is the lack of opposition from industry groups, including the Motion Picture Association, which represents Netflix, Paramount Studios, Sony, Warner Bros. and Disney, according to NPR.

NPR reports:

The legislation was also supported by the union SAG-AFTRA, whose chief negotiator, Duncan Crabtree-Ireland, points out that the bill had bipartisan support and was not opposed by industry groups such as the Motion Picture Association, which represents studios such as Netflix, Paramount Pictures, Sony, Warner Bros., and Disney. A representative for the MPA says the organization is neutral on the bill.

Bill S.B. 1047 also advanced. That bill would require AI companies to share safety proposals to protect infrastructure against manipulation, according to NPR.

The AP reports:

“It’s time that Big Tech plays by some kind of a rule, not a lot, but something,” Republican Assemblymember Devon Mathis said in support of the bill Wednesday. “The last thing we need is for a power grid to go out, for water systems to go out.”

The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.

California Democratic Gov. Gavin Newsom has until Sept. 30 to sign, veto or allow these proposals to become law without his signature. This puts all eyes on Newsom to either ratify or kill legislation that multiple stakeholders view very differently.

Given the opposition from major California employers like Google, there is a chance Newsom vetoes S.B. 1047, Vox reported.

And while tech giants oppose California’s Bill S.B. 1047, we have a hint at what they’d like to see happen at the federal level instead.

Last Thursday, the U.S. AI Safety Institute announced it had come to a testing and evaluation agreement with OpenAI and Anthropic, according to CNBC, that allows the institute to “receive access to major new models from each company prior to and following their initial public release.” 

Established after the Biden-Harris administration’s executive order on AI was issued last fall, the Institute exists as part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST).

According to the NIST:

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

If this public-private partnership agreement seems vague on details and methodology, that’s because it is. The lack of detail underscores a major criticism that Biden’s executive order was light on specifics and mechanisms for enforcement. 

The outsized push from big tech to settle regulation at the federal level makes sense when one considers the major investments most large companies have made in lobbyists and public affairs specialists.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” reports Public Citizen.  

For communicators, this push and pull is a reminder that regulation and responsible use must start internally – and that, whatever happens in California by the end of the month, waiting for tangible direction from either federal or state governments may be a path to stalled progress.

Without some required reporting and oversight, regulators will continue to struggle with the pace of AI developments. But what would responsible safety measures look like in practice?

A recent report from the Financial Times looks at the EU’s AI Act, which was ratified this past spring, to answer this question. The report notes that the AI Act ties systemic risk to computing power metrics, and argues this won’t cut it.

According to FT:

The trouble is that this relates to the power used for training. That could rise, or even fall, once it is deployed. It is also a somewhat spurious number: there are many other determinants, including data quality and chain of thought reasoning, which can boost performance without requiring extra training compute power. It will also date quickly: today’s big number could be mainstream next year. 

When the efficacy and accuracy of a risk management strategy depends largely on how you measure potential risks, agreeing on standardized parameters for responsible reporting and sharing of data remains an opportunity.

While many consider the EU’s AI Act a model that the rest of the world will follow (similar to the General Data Protection Regulation, or GDPR), the recent push in California suggests that the state’s outsized investments in AI are propelling it to lead by example even faster.

AI at work

While thinking about how to deploy AI responsibly often comes back to secure internal use cases, a recent report from Slingshot found that nearly two-thirds of employees primarily use AI to double-check their work. That’s higher than the number of workers using AI for initial research, workflow management and data analysis.

“While employers have specific intentions for AI in the workplace, it’s clear that they’re not aligned with employees’ current use of AI. Much of this comes down to employees’ education and training around AI tools,” Slingshot Founder Dean Guida said in a press release. 

This may account for a slight dip in US-based jobs that require AI skills, as measured by Stanford University’s annual AI Index Report. 

The report also looked at which AI skills were most sought after, which industries will rely on them the most and which states are leading in AI-based jobs.

The Oregon Capital Chronicle sifted through the report and found:

Generative AI skills, or the ability to build algorithms that produce text, images or other data when prompted, were sought after most, with nearly 60% of AI-related jobs requiring those skills. Large language modeling, or building technology that can generate and translate text, was second in demand, with 18% of AI jobs citing the need for those skills.

The industries that require these skills run the gamut — the information industry ranked first with 4.63% of jobs while professional, scientific and technical services came in second with 3.33%. The financial and insurance industries followed with 2.94%, and manufacturing came in fourth with 2.48%.

California — home to Silicon Valley — had 15.3%, or 70,630 of the country’s AI-related jobs posted in 2023. It was followed by Texas at 7.9%, or 36,413 jobs. Virginia was third, with 5.3%, or 24,417 of AI jobs.

This outsized presence of generative AI skills emphasizes that many jobs that don’t require a technical knowledge of language modeling or building will still involve the tech in some fashion.

The BBC reports that Klarna plans to get rid of almost half of its employees by implementing AI in marketing and customer service. It reduced its workforce from 5,000 to 3,800 over the past year, and wants to slash that number to 2,000.

While CIO’s reporting frames this plan as Klarna “helping reduce payroll in a big way,” it also warns against the risk associated with such rapid cuts and acceleration:

Responding to the company’s AI plans, Terra Higginson, principal research director at Info-Tech Research Group, said Wednesday, “AI is here to enhance employee success, not render them obsolete. A key trend for 2025 will be AI serving as an assistant rather than a replacement. It can remove the drudgery of mundane, monotonous, and stressful tasks.”

“(Organizations) that are thinking of making such drastic cuts should look into the well-proven productivity paradox and tread carefully,” she said. “There is a lot of backlash against companies that are making cuts like this.”

Higginson’s words are a reminder that the reputational risk of layoffs surrounding AI is real. As AI sputters through the maturity curve at work, it also reaches an inflection point. How organizations do or don’t communicate their use cases and connections to the talent pipeline will inevitably shape their employer brand.

This is also a timely reminder that, whether or not your comms role sits in HR, now is the time to study up on how your state regulates the use of AI in employment practices. 

Beginning in January 2026, an amendment to the Illinois Human Rights Act will introduce strict guidelines prohibiting discriminatory AI-based decisions on hiring or promotion, framing such behavior as an act of discrimination.

This builds on the trend of the Colorado AI Act, which more broadly focused on the public sector when it was signed into law this past May, and specifically prohibits algorithmic discrimination for any “consequential decision.”

While you work with HR and IT partners to navigate bias in AI, remember that training employees on how to use these tools isn’t just a neat feature of your employer brand, but a vital step to keep your talent sharp and your business competitive in the market.

BI reports:

Ravin Jesuthasan, a coauthor of “The Skills-Powered Organization” and the global leader for transformation services at the consulting firm Mercer, told BI that chief human-resources officers and other leaders would need to think of training — particularly around AI — as something that’s just as important as, for example, building a factory.

“Everyone needs to be really facile with AI,” he said. “It’s a nonnegotiable because every piece of work is going to be affected.”

He said experimenting with AI was a good start but not a viable long-term strategy. More organizations are becoming deliberate in how they invest, he added. That might look like identifying well-defined areas where they will deploy AI so that everyone involved uses the technology.

Jesuthasan’s words offer the latest reminder that comms is in a key position to pair experimentation efforts and tech investments with an allocated investment in training, one that includes not only a platform for instruction and education but time itself: dedicated time for incoming talent to learn the tools and use cases during onboarding, and dedicated time for high performers to upskill.

Treating this as an investment with equal weight will ultimately enhance your employer brand, protect your reputation and future-proof your organization all at once.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

AI news for communicators: What’s new and notable https://www.prdaily.com/ai-news-for-communicators-whats-new-and-notable-2/ https://www.prdaily.com/ai-news-for-communicators-whats-new-and-notable-2/#respond Wed, 21 Aug 2024 10:15:45 +0000 https://www.prdaily.com/?p=344121 What you need to know about the latest research and developments on AI risk and regulation. Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately a matter of power, saying that “ nothing will give you more power than military and AI.” British Historian Lord Acton would have offered […]

The post AI news for communicators: What’s new and notable appeared first on PR Daily.

What you need to know about the latest research and developments on AI risk and regulation.

Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately a matter of power, saying that “nothing will give you more power than military and AI.”

British historian Lord Acton would have offered a fitting response with his famous maxim, “Absolute power corrupts absolutely.” And as communicators continue to watch the battle among private company lobbying efforts, state regulation and federal regulation play out in real time, it’s hard to argue with Cuban’s sentiment.

In notable news for communicators, a controversial California AI regulation bill moves toward a vote at the end of the month, while the Democratic National Convention takes over Chicago amid an influx of deepfakes attempting to sway voter sentiment ahead of the 2024 presidential election.

Here’s what communicators need to know about AI this week.

Risks 

With the DNC hitting Chicago this week, coverage is fixated on the surrogates, speeches and memorable moments leading up to Vice President Kamala Harris’ formal acceptance of the presidential nomination Thursday. 

While the November elections will bring about many historic firsts, the widespread application of deepfake technology to misrepresent candidates and positions is also unprecedented.

On Monday, Microsoft hosted a luncheon at Chicago’s Drake Hotel to train people on detecting deceptive AI content and using tools that can help identify deepfakes as AI-manipulated media becomes more widespread.

The Chicago Sun-Times reports:

“This is a global challenge and opportunity,” says Ginny Badanes, general manager of Microsoft’s Democracy Forward Program. “While we’re, of course, thinking a lot about the U.S. election because it’s right in front of us, and it’s obviously hugely consequential, it’s important to look back at the big elections that have happened.”

Badanes says one of the most troubling political deepfake attacks worldwide happened in October in Slovakia just two days before the election for a seat in parliament in the central European country. AI technology was used to create a fake recording of a top political candidate bragging about rigging the election. It went viral. And the candidate lost by a slim margin.

In a report this month, Microsoft warned that figures in Russia were “targeting the U.S. election with distinctive video forgeries.”

These myriad examples highlight a troubling pattern of bad actors attempting to drive voter behavior. This plays out as an AI-assisted evolution of the microtargeting campaign that weaponized the psychographic profiles of Facebook users to flood their feeds with disinformation ahead of the 2016 election.

Once again, the bad actors are both foreign and domestic. Trump falsely implied that Taylor Swift endorsed him this week by posting fake images of Swift and her fans in pro-Trump garb. Last week, Elon Musk released image generation capabilities on Grok, his AI chatbot on X, which allows users to generate AI images with few filters or guidelines. As Rolling Stone reports, it didn’t go well.

This may get worse before it gets better, which could explain why The Verge reports that the San Francisco City Attorney’s office is suing 16 of the most popular “AI undressing” websites that do exactly what it sounds like they do.

It may also explain why the world of finance is starting to recognize how risky an investment AI is in its currently unregulated state.

Marketplace reports that the Eurekahedge AI Hedge fund has lagged the S&P 500, “proving that the machines aren’t learning from their investing mistakes.”

Meanwhile, a new report from LLM evaluation platform Arize found that one in five Fortune 500 companies now mention generative AI or LLMs in their annual reports. Among them, researchers found a 473.5% increase in the number of companies that framed AI as a risk factor since 2022.

What could a benchmark for AI risk evaluation look like? Bo Li, an associate professor at the University of Chicago, has led a group of colleagues across several universities to develop a taxonomy of AI risks and a benchmark for evaluating which LLMs break the rules most.

Li and the team analyzed government AI regulations and guidelines in the U.S., China and the EU alongside the usage policies of 16 major AI companies. 

WIRED reports:

Understanding the risk landscape, as well as the pros and cons of specific models, may become increasingly important for companies looking to deploy AI in certain markets or for certain use cases. A company looking to use a LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than how capable it is of designing a nuclear device.

Bo says the analysis also reveals some interesting issues with how AI is being developed and regulated. For instance, the researchers found government rules to be less comprehensive than companies’ policies overall, suggesting that there is room for regulations to be tightened.

The analysis also suggests that some companies could do more to ensure their models are safe. “If you test some models against a company’s own policies, they are not necessarily compliant,” Bo says. “This means there is a lot of room for them to improve.”

This conclusion underscores the impact that corporate communicators can make on shaping internal AI policies and defining responsible use cases. You are the glue that can hold your organization’s AI efforts together as they scale. 

Much like a crisis plan has stakeholders across business functions, your internal AI strategy should start with a task force that engages heads across departments and functions to ensure every leader is communicating guidelines, procedures and use cases from the same playbook, while serving as your eyes and ears to identify emerging risks.

Regulation

Last Thursday, the California State Assembly’s Appropriations Committee voted to endorse an amended version of a bill that would require companies to test the safety of their AI tech before releasing anything to the public. Bill S.B. 1047 would let the state’s attorney general sue companies if their AI caused harm, including deaths or mass property damage. A formal vote is expected by the end of the month.

Unsurprisingly, the tech industry is fiercely debating the details of the bill.

The New York Times reports:

Senator Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by another prominent start-up, Anthropic.

The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred.

“The new amendments reflect months of constructive dialogue with industry, start-up and academic stakeholders,” said Dan Hendrycks, a founder of the nonprofit Center for A.I. Safety in San Francisco, which helped write the bill.

A Google spokesperson said the company’s previous concerns “still stand.” Anthropic said it was still reviewing the changes. OpenAI and Meta declined to comment on the amended bill.

Mr. Wiener said in a statement on Thursday that “we can advance both innovation and safety; the two are not mutually exclusive.” He said he believed the amendments addressed many of the tech industry’s concerns.

Late last week, California Congresswoman Nancy Pelosi issued a statement sharing her concerns about the bill. Pelosi cited Biden’s AI efforts and warned against stifling innovation. 

“The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed,” Pelosi said.  

Pelosi cited the work of top AI researchers and thought leaders decrying the bill, but offered little in the realm of next steps for the advancement of federal regulation.

In response, Wiener disagreed with Pelosi.

“The bill requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: Perform basic safety testing on massively powerful AI models,” Wiener said.

This disconnect highlights the frustrating push and pull between those who warn against an accelerationist mentality toward AI and those who publicly cite the stifling of innovation, a key talking point of those doing AI policy and lobbying work on behalf of big tech.

It also speaks to the limits of thought leadership. Consider the op-ed published last month by Amazon SVP of Global Public Policy and General Counsel David Zapolsky that calls for an alignment on a global responsible AI policy. The piece emphasizes Amazon’s willingness to collaborate with the government on “voluntary commitments,” emphasizes the company’s research and deployment of responsible use safeguards in its products and convincingly positions Amazon as the stewards of responsible AI reform.

While the piece does a fantastic job positioning Amazon as an industry leader, it doesn’t mention federal regulation once. The idea of private-public collaboration as a sufficient substitute for formal regulation surfaces indirectly through multiple mentions of collaboration, though, setting a precedent for the recent AI lobbyist influx on Capitol Hill.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” Public Citizen reminds us.

As more companies stand up their philosophies on responsible AI use at the expense of government oversight, it’s crucial to understand what daylight exists between your company’s external claims about the efficacy of its responsible AI efforts and how those efforts are playing out on the inside.

If you’re at an organization large enough to have public affairs or public policy colleagues in the fold, this is a reminder that aligning your public affairs and corp comms efforts with your internal efforts is a crucial step to mitigating risk. 

Those who are truly able to regulate their deployment and use cases internally will be able to explain how, and to source guidelines for ethical use, continued learning and much more. True thought leadership will take the form not of product promotion, but of showing the work through actions and results.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-11/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-11/#respond Thu, 25 Jul 2024 09:00:55 +0000 https://www.prdaily.com/?p=343829 New LLMs proliferate but content withers. The tech goes fast and the regulation goes slow. That could be the opening sentence for nearly any version of this story, but it seems especially apt this week as Apple rolls out a new LLM, Meta looks to take the crown for most popular model in the world […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

New LLMs proliferate but content withers.

The tech goes fast and the regulation goes slow.

That could be the opening sentence for nearly any version of this story, but it seems especially apt this week as Apple rolls out a new LLM, Meta looks to take the crown for most popular model in the world and regulation continues to chug along without much oomph.

Here’s what communicators need to know about AI this week.

Tools and advancements

The past few weeks have been among the most consequential in America’s recent history, from the attempted assassination of Donald Trump to Joe Biden’s choice not to seek reelection.

But if you were trying to catch up on the news via AI chatbot, you might have been left in the cold. Some chatbots were hopelessly behind on the news, even claiming that the attempted assassination was “misinformation” and refusing to answer questions about who was running for president, according to the Washington Post.

Some bots fared better than others, namely Microsoft’s Copilot, which includes plentiful links to news sources. But it reveals the dangers in trusting AI as a search engine, especially for breaking news. 

While this particular use case is lagging behind, others are zooming ahead with tons of new features and technological advancements. Adobe is more deeply integrating AI tools into its classic suite of Photoshop and Illustrator, allowing users to create images, textures and other assets using text prompts, TechCrunch reports. While this could help experienced designers save time, it also raises the fear of those same experienced designers being replaced by fast, low-cost AI solutions. Designers also have concerns over how their intellectual property could be used to feed AI models. 

Samsung also released a new sketch-to-image tool that allows you to draw a doodle that can then be illustrated using generative AI. This can be fun when it’s just a sketch, but it can warp reality in some worrying ways when you add an AI-generated element to an existing photo.

You’ll only hear more about these weighty issues in the coming weeks and months. 

LLM laggard Apple is finally working on its own tool, the rolls-off-the-tongue DCLM-Baseline-7B. The 7B stands for “7 billion,” the model’s parameter count. ZDNet reports that it performs competitively against other models and is truly open source, allowing other organizations to build on Apple’s work.

We’ll have to see exactly how Apple integrates this model into other projects. 

Meanwhile, Meta has its sights set on the AI throne currently occupied by OpenAI’s ChatGPT. The company recently released Llama 3.1, the newest version of its open-source model, and claims that it outperforms major competitors like ChatGPT and Claude on several metrics. Meta, which has heavily incorporated the tool into social platforms like Facebook and Instagram, predicts that it will become the most-used AI platform in the world. Given Meta’s reach, that would make sense. But the question is: what is the demand for generative AI and search as part of a social platform?

Risks 

In news that’s equal parts depressing and unsurprising, deepfakes of Vice President and presumptive Democratic presidential candidate Kamala Harris began circulating quickly after she stepped into her new role.

The video combines real footage of Harris giving a speech at Howard University with manipulated audio intended to make her sound as if she is slurring her words and speaking in nonsensical circles, Mashable reported.

TikTok pulled the video down, but not before it racked up more than 4 million views. X, which has no prohibition against misinformation and deepfakes, allowed the video to remain up, albeit with a Community Note that identifies it for what it is. It’s a reminder of the power of lies to spread around the world before the truth gets its pants on, as well as the brand dangers inherent on X. 

But the AI industry is exposing others to risks as well through unvetted use of data. Figma’s “Make Designs” tool had to be pulled from the market after users asked it to create a weather app and discovered it spit out an example eerily similar to Apple’s iconic Weather app.

If a user were to take that app to market, they could wind up in serious legal trouble.

Figma acknowledges that some designs the tool was trained on weren’t vetted carefully enough. That’s cold comfort to companies who might rely on generative AI to provide designs and data they can trust. 

Relatedly, Condé Nast has accused AI chatbot Perplexity of plagiarism, claiming that the tool uses the magazine company’s reporting without permission or credit. While there’s just a cease-and-desist letter at this stage, it’s safe to guess that a lawsuit may soon follow.

In response to that deluge of lawsuits, many generative AI companies are working carefully to provide the vetted, trustworthy, approved content that businesses demand. Some, like Getty, are paying real human photographers to take pictures that can feed their AI models and ensure that every bit of information in the model is on the up-and-up. 

That, in turn, puts AI companies without those same resources in a bind when it comes time to train their models. According to researchers from the Data Provenance Initiative, 5% of all data and 25% of high-quality data have been restricted from use in AI models. As LLMs require a steady stream of data to stay up to date, this will pose new challenges for AI companies, forcing them to pay, adapt or die.

But even paying can cause controversy. A group of academics were outraged to discover their content had been sold by their publisher without their permission to Microsoft for use in AI. They were neither asked nor informed about the deal, according to reports. The importance of communicating how data will be used to all parties involved will only become more vital. 

Investors are beginning to suspect we’re in an AI bubble as big tech companies pour more and more cash into AI investments that have yet to pay off and startups proliferate and earn tons of funding. 

Now, this doesn’t mean AI will disappear or cease to be a hot technology any more than the internet disappeared during the dot-com bubble. But it does mean that the easy days of slapping “AI” onto a product or company name and raking in the dough may be coming to an end, even as many of us still strive to figure out how to incorporate these tools in our day-to-day workflow. 

Regulation

Coverage of Kamala Harris’ campaign launch is awash with information on where she stands on the issues that matter to American voters. Of course, that includes AI regulation.

TechCrunch highlights Harris’ roots as San Francisco’s district attorney and California’s attorney general before becoming a senator in 2016.

According to TechCrunch:

Some of the industry’s critics have complained that she didn’t do enough as attorney general to curb the power of tech giants as they grew.

At the same time, she has been willing to criticize tech CEOs and call for more regulation. As a senator, she pressed the big social networks over misinformation. During the 2020 presidential campaign, when rival Elizabeth Warren was calling for the breakup of big tech, Harris was asked whether companies like Amazon, Google and Facebook should be broken up. She instead said they should be “regulated in a way that we can ensure the American consumer can be certain that their privacy is not being compromised.”

As vice president, Harris has also spoken about the potential for regulating AI, saying that she and President Biden “reject the false choice that suggests we can either protect the public or advance innovation.”

Five senators sent a letter to OpenAI on Monday asking for context around its safety and employment practices following a group whistleblower complaint that alleged the company prevented staff from warning regulators about the risks its AI advancements posed.

The Hill reports:

Led by Sen. Brian Schatz (D-Hawaii), the group of mostly Democratic senators asked OpenAI CEO Sam Altman about the AI startup’s public commitments to safety, as well as its treatment of current and former employees who voice concerns. 

“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems,” Schatz, alongside Sens. Ben Ray Lujan (D-N.M.), Peter Welch (D-Vt.), Mark Warner (D-Va.) and Angus King (I-Maine), wrote in Monday’s letter. 

“This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies,” they continued. 

Last week, OpenAI joined several tech companies including Nvidia, Google, Microsoft, Amazon, Intel and others to form the Coalition for Secure AI (CoSAI), which aims to “address a ‘fragmented landscape of AI security’ by providing access to open-source methodologies, frameworks, and tools,” according to The Verge.

Functioning within the nonprofit Organization for the Advancement of Structured Information Standards (OASIS), CoSAI will focus on three goals: developing AI security best practices, addressing the challenges of AI and securing AI applications. The details still seem a little vague.

It’s worth noting that CoSAI’s aims stop short of addressing calls for federal regulation, offering a formalized working group in its place, and lawmakers will be watching to see what specific best practices the group comes up with.

This shouldn’t suggest that some CoSAI members aren’t advocating for regulation, too. Earlier this week, Amazon SVP of Global Public Policy and General Counsel David Zapolsky posted an article on the company’s website advocating for global regulation – framing the need as a matter of economic prosperity and security. 

“It’s now very clear we can have rules that protect against risks, while also ensuring we don’t hinder innovation,” Zapolsky wrote. “But we still need to secure global alignment on responsible AI measures to protect U.S. economic prosperity and security.”

Zapolsky’s suggestions include:

  • Standardized commitments about responsible AI deployment, like Amazon’s inclusion of invisible watermarks in its image generation tool to reduce the spread of disinformation
  • Uniform transparency from tech companies around how they are developing and deploying AI. Zapolsky notes that Amazon Web Services (AWS) created AI service cards to let customers know about the limitations of its tech, along with responsible AI best practices they can use to build applications safely.

While aspects of Zapolsky’s letter read as a promotional recap of Amazon’s progress in the space, showing the company’s work and using that work as a catalyst for a larger conversation about regulation may be what bridges the current disconnect between big tech companies that think they can solve it themselves and a federal government that seems unable to move at the rapid pace of AI acceleration.

MIT Technology Review Senior Reporter Melissa Heikkilä reported that one year ago, on July 21, 2023, seven leading AI companies including Amazon, Google, Microsoft and OpenAI committed to developing eight voluntary commitments for developing safe and responsible AI.

On the anniversary of that commitment, Heikkilä asked the companies for details on their progress and asked experts to weigh in: 

Their replies show that the tech sector has made some welcome progress, with big caveats.

“One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large,” says Merve Hickok, the president and research director of the Center for AI and Digital Policy, who reviewed the companies’ replies as requested by MIT Technology Review. Many of these companies continue to push unsubstantiated claims about their products, such as saying that they can supersede human intelligence and capabilities, adds Hickok. 

But it’s not clear what the commitments have changed and whether the companies would have implemented these measures anyway, says Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models, who also reviewed the responses for MIT Technology Review.  

It’s no surprise that formal regulations continue to stall. As previously reported, AI lobbying has surged drastically year over year, and leading tech companies have demonstrated their vested interest in proposing their own safeguards for responsible AI over helping Uncle Sam standardize something that will hold them accountable.

Amazon’s letter is a notable exception that stands out as an example of how an organization’s thought leaders can highlight its work and advancements as a conversation starter.

The regulation conversation continues to prove fascinating, even as it moves slowly. With headlines continuing to focus on the November elections, it will be worth watching what progress the current administration makes on the way out and what tangible policy Harris is willing to shape.

AI at work

As federal AI regulation continues to move at a sluggish pace, most businesses are also still in the early stages of adopting AI.

Axios shared the results of AI platform ServiceNow’s inaugural AI Maturity Index, which surveyed nearly 4,500 respondents across 21 countries, and found that “many companies have struggled to go from experiments into full-scale use of the technology.” 

“The study assigned maturity scores between 1 and 100,” reported Axios. “The average score was 44 and the highest score was just 71. Only about 1 in 6 companies scored higher than 50.”

ServiceNow Chief Customer Officer Chris Bedi told Axios that the adoption of AI use cases is ultimately a leadership competency.

 “You have to be able to get up in front of your team and say, ‘Here’s how your roles are going to evolve in an AI-first world,'” he said. 

Bedi also broke the maturity curve for adoption into two modes, defining the first mode around incremental improvements and the second mode as taking the leap to design new models and augment ways of working.

“Mode two is harder,” continued Bedi. “It’s saying, ‘If we were to assume the models are good enough, and AI was pervasive, how would we redesign these departments, these jobs, the organization, the enterprise, from scratch?’ It’s a much harder intellectual exercise.”

Though much of the piece reads as promotional, it highlights the wisdom and innovation of those on the frontline of AI advancements who can drive forward the maturity Bedi advocates for. Doing so will require a partnership between those doing the work and the leaders who ultimately sign off on the budget.

While many organizations are slow to move to this second mode, former Amazon AI engineer Ashish Nagar recently explained how he created the customer service intelligence platform Level AI to address productivity challenges in the automated customer service industry:

“Frontline workers, like customer service workers, are the biggest human capital in the world,” Nagar told TechCrunch. “So, my idea was to use ambient computing — AI that you can just talk to and it listens in the background — to augment human work.”

While the TechCrunch piece and the Axios report both hinge on the expertise of a service provider,  Forbes contributor Bernard Marr asks what promised innovations will really prove transformative across industries and which are exercises in marketing:

While the picture being painted points to an imminent revolution across all industries, in reality, the impact is likely to be more iterative and nuanced.

Predictions of astronomic leaps in value that AI will add to industries may be achievable in theory. But challenges around regulation and data privacy, as well as technical challenges such as overcoming AI hallucination and bias, may not be simple to solve.

Overall, this means that while I believe AI will have truly profound implications for business, jobs and industry, the pace of this transformation may well be slower than some of the hype and hyperbole suggests – in the short term, at least.

Marr’s piece, which homes in on the gap between aspirational promises and execution of AI solutions across the retail and financial services industries, offers a sobering reminder to communicators: Being an advocate and early adopter of new tech requires cutting through the advertorial language, asking for metrics and examples from solutions providers, and setting aside time to experiment up front.

This remains one of the best ways to ensure that your skills as a communicator are central to the strategic growth and scale of your operations, and that any tools you advocate for have been tested to prioritize safety, accuracy and tangible results.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-11/feed/ 0
Pacific Northwest National Laboratory CCO Amanda Schoch on relationship-based learning https://www.prdaily.com/pacific-northwest-national-laboratory-cco-amanda-schoch-on-relationship-based-learning/ https://www.prdaily.com/pacific-northwest-national-laboratory-cco-amanda-schoch-on-relationship-based-learning/#respond Wed, 24 Jul 2024 10:00:54 +0000 https://www.prdaily.com/?p=343808 Schoch shares lessons learned from a storied career in government ahead of her Ragan panel in Nashville next month. Developing relationships with a learning mindset is the first step toward communicating across departments and functions, a crucial component of any integrated strategy.   While comms leaders struggle to gain the foothold that gives them that omniscience […]

The post Pacific Northwest National Laboratory CCO Amanda Schoch on relationship-based learning appeared first on PR Daily.

]]>
Schoch shares lessons learned from a storied career in government ahead of her Ragan panel in Nashville next month.

Developing relationships with a learning mindset is the first step toward communicating across departments and functions, a crucial component of any integrated strategy. While comms leaders struggle to gain the foothold that gives them that omniscience and influence, a willingness to learn new skills during moments of uncertainty can tactically guide your career.

Amanda Schoch, Chief Communications Officer at Pacific Northwest National Laboratory (PNNL), understands this better than most. Initially motivated to help improve national security communications after 9/11, Schoch’s career offers tangible examples of how trusting your guiding purpose and nurturing the relationships that feel right can lead you to fresh, high-stakes experiences that take your career to new heights.

Schoch shared more about her journey ahead of her keynote panel at Ragan’s Employee Experience Conference in Nashville this August.

This conversation has been edited for length and clarity.

Justin Joffe: Researching before our chat, I didn’t realize that so much of your career has been in government.

Amanda Schoch: Yeah! I’m now a government contractor (at the Pacific Northwest National Laboratory), where I work for Battelle. I’ve been government or government-adjacent my whole career.

And since right out of school, too. Moving from being a legislative assistant to an appropriations associate for the House must have been a huge responsibility to take on that early. Seems like you were informing policy across a vast amount of stakeholder sets.

AS: I worked for individual members of Congress starting with Rodney Frelinghuysen, who was my hometown congressman—he took a chance on me. I knocked on his door many, many times until he offered me an internship – which I took while working at Starbucks to cover my rent and benefits.   The internship only lasted a week or so before I was offered a full-time job on Capitol Hill.

What leadership competencies did you develop during those years?

Amanda Schoch, CCO, Pacific Northwest National Laboratory

AS: I read an article once that used the term “everything is figureoutable,” and that really sums up one of the lessons I learned working on Capitol Hill, which has continued to serve me to this day. I was 22, right out of school with no real-world experience, yet I was advising members of Congress on significant policy that would impact the nation. I had to learn how to learn an issue quickly, find the experts, and get smart on the topic fast so that I could provide the context, advice and counsel.

Not knowing something is an opportunity to learn and grow. I think that is incredibly relevant to communicators because we’re translators for our organizations. We are rarely the subject matter expert (SME). Seeking out the SME, building trust, treating their knowledge with care and translating that information is a skill I developed early in my career.

That intake process is so important. A lot of people rush through it, which is different from doing it efficiently. You can do it efficiently and still do your due diligence. But people put it off because it’s time-consuming and their list is already so vast. Research and education take time. What helped you absorb and retain that learning?

AS: People learn information in different ways. I learn best when I hear something and can have a conversation, which is why relationships are so important and not transactional to me. When approaching a new area, I find someone who knows the issue, pick up the phone, and start a conversation.

Next you move to the Office of the Director of National Intelligence (ODNI), and that seems like a big shift. Why did you get into the intelligence sector?

AS: I graduated college right before 9/11. While he was not in the World Trade Center or working there at the time, my dad spent the majority of his career in the World Trade Center. Watching the towers fall had a profound impact on me and drove me to a career in policy and national security.

I sought out national security positions on Capitol Hill to build my expertise, and I was ready for a more regular schedule that would be more conducive to starting a family, which drove me to make the jump from the legislative to the executive branch.

It seems like the next role also let you flex your chops as a strategist. Rethinking and redesigning the human capital program must have been a huge feather in your cap.

AS: It did.  The program I led was created to foster integration across the intelligence community after 9/11.  It required intelligence community senior staff to do a two-year tour at a different agency to foster collaboration, understanding, and networking across the community’s 17 agencies.   We saw an opportunity to increase the effectiveness of the program by expanding it to junior staff, fostering that networking and collaboration earlier in a staff member’s career.

We had to work across the 17 agencies to bring that change to fruition and we had setbacks along the way.  I learned that the best strategies keep the outcome as the focal point but are flexible and adaptable in how you reach that outcome.

Running that program also inspired me to take on a tour in another agency. After years of sharing the importance of the program, it was time for me to walk the walk and put myself in a place where I’m less comfortable, had to learn, and stretch myself and my skills.

Well, it seems to have paid off. In three years you go from Deputy Strategist to Chief of Communications for Operations. Is that a C-suite title? Government roles are strange.

AS: It was a wild ride going from Deputy Strategist to Chief of Communications for Operations. I served on the team that led an agency-wide reorganization. Part of that reorganization was re-engineering NSA’s communications function in the wake of some high-visibility leaks.

One of the things we learned from those leaks is the importance of communications – if an organization isn’t telling its own story, it creates a vacuum for someone else to tell it instead. This created a challenge for a community and an agency that had a legacy of operating in secret.

One of the issues I focused on during that reorganization was ensuring the communications function at the NSA was positioned to tell the agency’s story in a more robust way, while still protecting sources, methods and its classified work. That led me to the opportunity to run the communications team for NSA’s Operations directorate.

And then after years of running that, you moved back to the ODNI. Why?

AS: One of my mentors and role models asked me to come back. I had been thinking about staying at NSA when the call came in. My trust and respect for her made the decision to return an easy one.

What a great example of the staying power of healthy working relationships, to your earlier point.

AS: Absolutely. I think there are those people that you so align with from a values and a mindset perspective that when they see an opportunity, you trust the opportunity is there.

One of the things I’ve trusted in my career is that saying yes to an opportunity will always open the next door. Saying no may feel safe but saying yes often leads to places you can’t yet imagine for yourself. So I’ve always played it a little bit unsafe, always leaned into where I’m a little bit uncomfortable and a little bit excited.

It paid off. A year later you became the CCO! What can you share about that transition from both a competency and a remit perspective?

AS: Well, I’ve always felt a little bit uncomfortable and often like a bit of an impostor. I finally got to a moment in my career where I realized that taking the next step is always going to be scary and that most people feel that self-doubt, but you shouldn’t let that hold you back.

I also knew I wasn’t in it alone – I had an exceptional team to rely on. So I focused on two things: maintaining a strong team and setting a vision. Then I trusted that team to execute.

So what changes after you hold this CCO role for three and a half years? Why did you move to PNNL?

AS: The run-up to the 2020 elections was an intense period.  I found myself spending most of my time focused on crisis communication and reactionary press management.  This was exacerbated by the pandemic which forced us to operate with reduced staff.  Frankly, I was struggling with burnout.

Post-election I focused on helping the new leadership team get settled, and ensuring my team was in a strong position.   When the Pacific Northwest National Lab reached out to me with an offer to help them leverage communications to advance the Lab’s mission and strategy, I was ready to say yes.

I’m passionate about the potential for communications to be a strategic lever in achieving business outcomes. I like to step back, see the strategic landscape and identify where communications can bring the biggest value to an organization. PNNL offered me an opportunity to lead an incredibly talented and diverse team and lean into that strategic muscle again.

You speak of that balance between being a strategist and a doer, but all comms executives sometimes still have to roll up their sleeves. Maybe it’s about having the space and time to create redundancies that help your team act at your behest, giving you an omniscience across functions. That’s not something every comms leader has.

AS: Yes, and I have an amazing team that is better at their craft than I am. I try to focus on removing barriers for them and then getting out of their way. That way I can spend time where I bring unique value: seeing the big picture, setting the strategy and vision, connecting dots between efforts and ensuring that our communications efforts are aligned to the Lab’s strategic outcomes.

You’re joining a Ragan panel of fellow C-Suiters in Nashville next month to talk about solving employee disengagement. How have you thought about employee engagement in your roles?

AS: People power our institutions and organizations. As communicators, we have a critical role in engaging and inspiring our workforce as well as attracting future staff.  We see ourselves as “sensemakers” – simplifying and rationalizing communications for our staff so they know what is most important and how they fit into the broader organization.

We focus on our audience – our staff – and make sure our communications are written in ways that meet them where they are.  We put people front and center in our communications – people connect and trust other people, not companies – so we constantly look for ways to humanize our communications.  And we track metrics to understand which communications and tactics are resonating and then leverage that insight to adapt our plans and approaches.

Amazing. Any parting wisdom to share?

AS: We have three new interns who just started this week and they asked me for one piece of advice I would give them. I said, ‘If you have the opportunity, take the leap.’ That’s always served me. I never had a long-term vision for my career.  I got here by saying yes when an opportunity presented itself.

Join Schoch in conversation with other C-suite leaders this August at Ragan’s Employee Experience Conference in Nashville.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

The post Pacific Northwest National Laboratory CCO Amanda Schoch on relationship-based learning appeared first on PR Daily.

]]>
https://www.prdaily.com/pacific-northwest-national-laboratory-cco-amanda-schoch-on-relationship-based-learning/feed/ 0
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-10/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-10/#respond Thu, 11 Jul 2024 10:30:38 +0000 https://www.prdaily.com/?p=343650 China rises, the U.S. hesitates on regulation and more. This week, a Shanghai AI conference raises new questions about the global AI race, a former FCC chair calls out slow federal regulation efforts while California’s efforts advance, and CIO salaries increase. Read on to find out what communicators need to be aware of this week […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
China rises, the U.S. hesitates on regulation and more.

This week, a Shanghai AI conference raises new questions about the global AI race, a former FCC chair calls out slow federal regulation efforts while California’s efforts advance, and CIO salaries increase.

Read on to find out what communicators need to be aware of this week in the latest chaotic, captivating chapter of AI’s growing influence.  

Tools and advancements

Last week’s World AI Conference in Shanghai featured Chinese tech companies showcasing over 150 AI-related products and innovations, with a handful of foreign companies like Tesla and Qualcomm participating, too.

Notable among the unveiled tech was an advanced LLM from SenseTime called “SenseNova 5.5,” reports Reuters. The company alleges this model will rival ChatGPT-4 in its mathematical reasoning abilities. 

The conference took place just as OpenAI banned access in China, according to The Guardian.

OpenAI has not elaborated about the reason for its sudden decision. ChatGPT is already blocked in China by the government’s firewall, but until this week developers could use virtual private networks to access OpenAI’s tools in order to fine-tune their own generative AI applications and benchmark their own research. Now the block is coming from the US side.

Rising tensions between Washington and Beijing have prompted the US to restrict the export to China of certain advanced semiconductors that are vital for training the most cutting-edge AI technology, putting pressure on other parts of the AI industry.

But executives like Zhang Ping’an, who leads Huawei’s cloud computing function, seemed unfazed at the conference.

“Nobody will deny that we are facing limited computing power in China,” Zhang said, according to Reuters. “If we believe that not having the most advanced AI chips means we will be unable to lead in AI, then we need to abandon this viewpoint.”

Meanwhile, a new poll conducted by the AI Policy Institute and shared with TIME found that American voters would rather the U.S. worry less about keeping up with China’s innovations and focus more on responsible AI.

Time reports:

According to the poll, 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI—by preventing the release of tools that terrorists and foreign adversaries could use against the U.S.—is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” A majority of voters support more stringent security practices at AI companies, and are worried about the risk of China stealing their most powerful models, the poll shows. 

But new technology is coming, even with a cautious approach. MIT Technology Review shared a deep dive into an evolving category of AI assistants, dubbed “AI agents.” 

While the definition of AI agents is somewhat ambiguous, Nvidia AI agents initiative lead Jim Fan described them as tools that can make decisions autonomously on your behalf. In one of MIT’s examples, an AI agent functions as a more advanced customer service bot that’s able to analyze, cross-reference and evaluate the legitimacy of complaints.

According to MIT Technology Review:

In a new paper, which has not yet been peer-reviewed, researchers at Princeton say that AI agents tend to have three different characteristics. AI systems are considered “agentic” if they can pursue difficult goals without being instructed in complex environments. They also qualify if they can be instructed in natural language and act autonomously without supervision. And finally, the term “agent” can also apply to systems that are able to use tools, such as web search or programming, or are capable of planning. 

Other AI tools continue to show potential for making all sorts of creative assets the average person may not be able to produce on their own. Consider Suno, an AI music creation app that allows you to create AI-generated tunes and is, as of this writing, still free to download (the startup was sued in June by a handful of record companies, so hard to say how long that will last).

Washington Post tech reporter Chris Velazco recently detailed his experience with Suno, asking it to make a journalism-themed song for the storied paper that could serve as an update to John Philip Sousa’s iconic “Washington Post March.”

While a transformative tool like Suno has untold utility for communicators and marketers to create bespoke assets on the fly, looming legal threats raise the question of who will ultimately be left holding the bag when the models that harvest copyrighted material are sued: the software companies or their clients?

The popular collaborative design tool Figma recently disabled its “Make Design” feature after it was accused of being trained on pre-existing apps, reports TechCrunch.  YouTube also rolled out a policy change last week that lets people request AI-generated deepfakes that simulate their face or voice be removed, reports Engadget. 

The Wall Street Journal reports that many AI companies are focused on growing their customer base with lower-cost, less powerful models.

There’s even a specific use-case for internal communications: 

The key is focusing these smaller models on a set of data like internal communications, legal documents or sales numbers to perform specific tasks like writing emails—a process known as fine-tuning. That process allows small models to perform as effectively as a large model on those tasks at a fraction of the cost. 

Risks and regulation

Former FCC Chairman Tom Wheeler appeared on Yahoo Finance’s “Catalysts” show this week to say that the U.S. has failed to lead on AI regulation efforts.

Yahoo Finance reports:

He notes that the EU and individual states in the US have their own set of rules, “which will create new problems for those tech companies who rely on a uniform market.” He explains that the way Europe regulates tech companies is a “much more agile approach” than industrial-era micromanagement, and it is an approach that “continues to encourage innovation and investment.” However, he believes the US will have a difficult time getting to that point as it grapples with “a Congress that has a hard time making any decisions.”

Some states are moving forward with their own regulations in the absence of national guidance. The AP reports that last week, California advanced legislation requiring AI companies to test their systems and include safety features that protect the tools against weaponization. 

But not everyone supports the regulation. 

According to the AP:

A growing coalition of tech companies argue the requirements would discourage companies from developing large AI systems or keeping their technology open-source.

“The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation,” Rob Sherman, Meta vice president and deputy chief privacy officer, wrote in a letter sent to lawmakers.

Opponents want to wait for more guidance from the federal government. Proponents of the bill said California cannot wait, citing the hard lessons they learned by not acting soon enough to rein in social media companies.

The proposal, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices.

It’s no surprise, then, that AI-related lobbying reached a record high over the past year, according to Open Secrets.

Those hoping for some clarity or resolution would do best to work with their organization’s leaders across functions and codify their own set of responsible AI guidelines instead of waiting for the government.

AI at work

Misuse of AI doesn’t just affect external stakeholders, but the workforce, too. CIO reports on the rise of shadow AI, a term referring to unsanctioned AI use at work that puts sensitive company information and systems at risk.

The report cites numerous risks of unsanctioned AI use, including unskilled workers feeding tools sensitive data, disruptions among workers with various levels of competency and legal issues.

Suffice it to say, training employees at all levels on responsible and acceptable use is a sound solution. 

The Wall Street Journal reports that chief information officers are seeing previously unheard-of compensation increases for increasingly taking on AI-related responsibilities. 

According to WSJ:

CIO compensation increased 7.48% on average among large enterprises and 9% among midsize enterprises over the past year—the biggest gains among information-technology job categories, according to salary data from consulting firm Janco Associates. 

Overall, CIO and chief technology officer compensation is up more than 20% since 2019, with boosts to base pay and, more often, equity packages, according to IT executive recruiting firm Heller Search Associates. 

The gains indicate growing investment by enterprises in AI strategies and corporate tech leaders becoming more visible as they are increasingly tasked with new AI-related responsibilities.

“ChatGPT caught a lot of incumbent technology companies by surprise, and no organization wants to be left behind,” said Shaun Hunt, CIO of Atlanta-based construction services firm McKenney’s. 

It’s clear that AI is changing the game – for better and for worse. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-10/feed/ 0
Southwire’s Fernando Esquivel on balancing people and process https://www.prdaily.com/southwires-fernando-esquivel-on-balancing-people-and-process/ https://www.prdaily.com/southwires-fernando-esquivel-on-balancing-people-and-process/#respond Mon, 08 Jul 2024 10:00:42 +0000 https://www.prdaily.com/?p=343605 The organization’s Chief People and Culture Officer charts his journey toward developing a formula for business success with employees as its heart. As one of the world’s largest manufacturers of cable and wire, Southwire likes to call itself one of the largest companies you’ve never heard of. The products it makes are crucial to American […]

The post Southwire’s Fernando Esquivel on balancing people and process appeared first on PR Daily.

]]>
The organization’s Chief People and Culture Officer charts his journey toward developing a formula for business success with employees as its heart.

As one of the world’s largest manufacturers of cable and wire, Southwire likes to call itself one of the largest companies you’ve never heard of. The products it makes are crucial to American infrastructure and the tech industry, as are the people who make the business run. With decades of leadership roles under his belt, Southwire Chief People and Culture Officer Fernando Esquivel understands this well, and his strategies are shaped by a strong sense of the ways people drive productivity.

But Esquivel attributes the beginning of his people-first business acumen to his time at a university in Mexico serving as president of the Student Association of Mechanical and Electrical Engineers. While he knew about mechanical and electrical engineering, he also noticed the curriculum didn’t have much to offer on interpersonal management skills.

“I had my best friends around me — a group of eight people that are still very good friends — but there was conflict,” Esquivel remembers. “That’s when I first realized that I needed to learn more about the human aspect of management. Because at the end of the day, it was in my face when my best friends were dealing with conflict. In order to make a decision, there’s no formula for that. It’s dialogue and it’s understanding where everyone’s speaking from. That was my first realization that I needed to get more into the people space.”

Esquivel shared more about his journey ahead of his keynote panel at Ragan’s Employee Experience Conference in Nashville this August.

Exposure to the business of numbers 

In 1988, Esquivel was invited to apply for an internship with P&G. While mechanical and electrical engineers normally focused on manufacturing, Esquivel told them that his main interest was personnel, followed by research and development. That caught their attention.

Fernando Esquivel, EVP, Chief People and Culture Officer, Southwire

"At that time, engagement surveys were still in an incubation state, and there was one that was taking place in Mexico," recalled Esquivel. "As an engineer, you solve for problems, so they gave me a stack of papers. We didn't have computers, and we had to find a way to pretty much account for all of that in the fastest way, then analyze the information and come up with trends and graphs."

Esquivel eventually joined a rotational program as a full-time employee, piloting a program that provided him with four months in training and development, four months in industrial relationships, four months in total rewards, then time in recruiting, before getting to choose where he wanted to go.

This was the first time that Esquivel was exposed to the world of business through the lens of numbers. He credits good teachers and mentors, who trusted him to look at things differently, empowering him to grow and succeed across these disciplines.

“I said, ‘Look, I’m an engineer. If I don’t go to the manufacturing side of the equation and only spend time in administration, I will never get to know the experience of being an engineer.’”

This ultimately meant taking a move that was not lateral, but organizationally downward. It also meant that Esquivel had a chance to experience the business as a line manager without his tie, portfolio and shiny shoes. “I took my safety shoes, my helmet, my jeans and my polo shirt and learned from operations and production,” he said.

“Through this experience, I learned very fast that you need to win the hearts and minds of people. How do you do that? Walking the aisles, getting on the line and learning the job yourself. Once you start being a part of the team … gee, I mean, it’s the best thing that can happen to you. It’s hearts and minds.”

The transformation process

That experience on the floor taught Esquivel what a production environment is all about, something he considers one of the best decisions he’s ever made. All these years later at Southwire, he’s comfortable being in the plant and knowing what to look for.

“I know about productive maintenance,” explained Esquivel. “I know about operations. I know about downtime and, obviously, safety. That’s one of the reasons I’m at Southwire. The transformation process is fascinating to me. The raw materials go from one door and the final product goes through the other side.”

During his years as a line manager at P&G, Esquivel helped operators learn what they need to run at target efficiency percentages.

“We had meetings and for some, they didn’t understand the formula,” he remembered. “So, I started spending time with my team on the math, the fractions and drawing the connection to the difference in outputting eight cases per minute versus six cases per minute. The pivotal thing that happened is that it went beyond the job and the target production numbers — some of them even started asking me questions about the homework of their kids.”

This created a connection that also improved things from a production standpoint.

“I was supporting them as they were supporting the supervisor,” said Esquivel. “It’s all through people. And once your people trust you, it’s a different game.”

Looking ahead while working across the region

As P&G began implementing total productive maintenance, Esquivel was invited to join a Mexico-based team that trained facilitators across the Latin America (LATAM) region. At just 24 years old, it brought him to places like Brazil, Venezuela, Panama and Costa Rica.

"That was very significant as part of my professional formation and learning about consistency, because P&G kept the same standard no matter the country," he said. Planning these trips to deliver on objectives taught him the importance of looking ahead.

"For that, you need to see several quarters ahead of you," explained Esquivel.

"You need to understand the national holidays in each country, the plan and what you are solving for. What do we want to achieve — increase efficiencies, maximize our throughput — and then do it within the boundaries of safety. The goal is to be proactive versus reactive. You get all the business variables, but then you have to build a plan on how to really deliver over time in a multi-country region."

Esquivel carried this with him when he joined Microsoft as the LATAM HR Director in 2001. When he became the Asia Pacific HR Director in 2004, moving to Asia immersed him in new and more diverse cultural perspectives.

“You have all the religions, vegetarians to beef lovers, Victorian football to baseball and everything in between,” he said. “I was exposed to a microcosm of the world.”

Growing the region was an exercise in consistency and staying close with business leaders. Esquivel underscored the partnership he had with the CEO, CFO and General Counsel across different regions as key to honing business acumen while driving people and culture strategy. These relationships contributed to his ability to speak the language of numbers, especially understanding P&L and the balance sheet.

A formula for success

As an engineer, Esquivel has a formula for success:

(A+I+E) x P = S

“A” is anticipation, “I” is innovation, “E” is execution.

“How much can I anticipate?” Esquivel regularly asks. “We do this all the time since I’ve come to Southwire. What’s coming our way in the next two quarters? I like to talk about what’s going to happen in Q3 during Q2, and so on.”

When Esquivel talks about innovation, he focuses on challenging the status quo.

“It’s built in me because I have been provided with the opportunity to ask, ‘Why are we doing it this way? And how can we do it differently?’” he said. “Sometimes it’s innovations, sometimes it’s simplifications.”

These are all multiplied by “P” for passion, because Esquivel believes you have to love what you do.

“My formula for happiness would be exactly the same, but at the end I’d add ‘QR’ for the quality of relationships,” he said.

Applying the lessons every day

Ultimately, Southwire trusting Esquivel to lead on culture is also about his trust in the organization, a testament to the power of business as a universal language and a unifier during a time when there aren't many easy answers.

“It also speaks about the quality of the company that I’m part of because they work with me similarly,” Esquivel said. “They open their arms and are open to listen.”

Esquivel understands that he has a role to play, just like the person working next door and the person working downstairs.

"How do you go from that moment of strategizing to implement and deploy? In the middle, you have to activate hearts and minds," he said.

“Meet the people where they are by understanding their needs and what they are looking for, communicate yours and ask for help. It’s the four Cs — connect, communicate, collaborate and celebrate.”

Join Esquivel in conversation with other C-suite leaders this August at Ragan’s Employee Experience Conference in Nashville.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

The post Southwire’s Fernando Esquivel on balancing people and process appeared first on PR Daily.

]]>
https://www.prdaily.com/southwires-fernando-esquivel-on-balancing-people-and-process/feed/ 0
General Mills CCO Jano Cabrera on adapting strategy to the business landscape https://www.prdaily.com/general-mills-cco-jano-cabrera-on-adapting-strategy-to-the-business-landscape/ https://www.prdaily.com/general-mills-cco-jano-cabrera-on-adapting-strategy-to-the-business-landscape/#respond Wed, 26 Jun 2024 10:00:50 +0000 https://www.prdaily.com/?p=343549 Cabrera shares lessons learned throughout his illustrious career in politics, brand and agency comms. The most effective communications leaders understand that there’s wisdom in working on both the agency and client side of the aisle —and that both have room for creativity when you consider the needs of the business first. Jano Cabrera, CCO of […]

The post General Mills CCO Jano Cabrera on adapting strategy to the business landscape appeared first on PR Daily.

]]>
Cabrera shares lessons learned throughout his illustrious career in politics, brand and agency comms.

The most effective communications leaders understand that there's wisdom in working on both the agency and client side of the aisle — and that both have room for creativity when you consider the needs of the business first.

Jano Cabrera, CCO of General Mills, understands this better than most. Cabrera's path to the C-seat was paved during an illustrious career that included serving as deputy press secretary for Vice President Al Gore and the national spokesman for the Democratic National Committee, serving as a chair and VP at Burson (then Burson-Marsteller), and becoming the SVP of corporate relations at McDonald's.

Cabrera caught up with Ragan and shared his thoughts on embracing the global language of finance, the power of choosing what not to say, and the importance of adopting a ‘fewer, bigger, better’ approach to strategic communications campaigns. He’ll share more during our webcast with The Conference Board, “What Initiatives Are Communications & Marketing Taking for the Elections?” on June 25th.

This conversation has been edited for length and clarity.

Justin Joffe: When, in the first political communications chapter of your career, do you start to understand behavioral trends and marry them to communications styles? Was there a moment when that clicked for you?

Jano Cabrera: Yeah. I learned this lesson very early on. It’s carried through from political war rooms into corporate boardrooms. I was young when I was working on political campaigns and had a responsibility well beyond my years of being a spokesperson, first for people on the Hill and then later for the Vice President of the United States. It’s a big responsibility.

Now, when people hear that, they think that’s the guy or gal speaking on behalf of the person to a journalist. And that’s true, but that’s just part of the job —  the door swings the other direction as well. I needed to be able to explain, ‘This is the interview you’re about to do. It’s with this person, and here’s what I think they’re going to ask.’ Early on, I focused way too much on the what — you know, ‘This is an interview about your recent announcement on livable communities.’

Very quickly, I learned I was doing a disservice to my principal. Most of the time, journalists come in with a question or two on a topic they’re likely going to write about, but they have so much more they want to write about. So the job is really to see what else they have written. What is your theory of the case? Less ‘what’ and more ‘why’.

Why do they want this time with the principal? Where are the dangers and pitfalls? They might have started by asking a question about x, but because y just happened in the political environment, you now need to be ready to answer that question.

It’s shifting your mind from ‘Here are the basics’ to ‘Here’s the strategic landscape that we’re playing in’. That helped me immensely because it meant that every time I had a conversation with a principal early on in my career, I wasn’t focused on what we were doing but why we were doing it—the context, the strategic landscape in which we were doing it.

Once you reach this level in the C-suite, of course, it’s the same. What’s the landscape that the business finds itself in?

Omniscience requires some cross-functional visibility, though. We know that comms are dot connectors. We know that strategic comms incorporates every function and business unit in harmony. But the visibility doesn’t always happen.

JC: Yeah, I couldn't agree more. I often say that we all speak multiple languages, even if you think you just speak English. That might be the sole language you're thinking of, sure, but you also speak the language of the zeitgeist. I can make the sound from "Law and Order" and you're already hearing it in your head. I'm hearing it. There are these similarities that we share.

There’s another language that we should all speak, which is the language of finance. And the reality I’ve found over the course of my career is that communicators rarely speak the language of finance. If you say CAGR and get a blank look, that’s okay. Because most of the time they’re just getting stuff from IR or the finance team.

They're getting curated language, learning these terms only when they already have a place in the narrative instead of shaping the narrative with their knowledge.

JC: That’s exactly right. And that means there’s a limit as to how strategic you can be. If you literally can’t speak the same language as the person you’re there to provide service to, if what they’re saying sounds like a foreign language to you, what possible service could you provide? There is a book that I recommend to everybody called “The 90-Minute MBA” and it’s going to give you the language of MBAs.

When you go into most C-suites, that’s the language they’re speaking. The GC is going to speak the language of law school and MBA. Even if they’re not MBAs, they will by osmosis learn that language. There is some overlap between the two. But communicators have to go out of their way.

It seems like business acumen isn’t just a bridge across functions, but also universal language when managing a global stakeholder set. When you’re managing multiple regions, is the language of finance a unifier?

JC: Absolutely. I would have a very different answer if we were talking about communications in the political space, because that’s a different language. But if you’re talking about communications in the corporate space, the language of business is business. And the business language is MBA language, MBA-ese, if you will. You don’t have to become entirely fluent, but you’ve got to get better than most communicators are when they start.

I'm intrigued to know how that followed you out of politics. You were the National Spokesman for the DNC, then the Communications Director. But then you moved to the agency side at Burson-Marsteller (now Burson). What made you want to go to the agency side?

JC: Unfortunately, there is always the law of diminishing returns when it comes to doing something you genuinely love. And I love politics. But it ages you in dog years. You have to give more than 12-hour days practically 365 days a year. It’s exhausting, so I wanted a break.

Jano Cabrera, CCO, General Mills

But I loved the skill set of communications. What's similar? Corporate consulting. The gift that I took from one space to the other is that, most of the time when people think of campaign discipline, they think, 'I'm gonna say the same thing over and over again until not only is my audience sick of hearing it, but I'm sick of saying it.'

Sure, that's one aspect. The other aspect of campaign discipline is choosing what not to say. And being very focused really does require you to say, 'Look, there's so much I could touch on, but I'm going to focus on these things because I just want to be known for them.'

Once I started consulting, one of the things that I would often find with clients is, 'Here's the problem that you hired us to solve, here's a campaign, here's how we're going to do it.' To the extent that it needed to evolve through conversation, it would. The client mindset is very important.

But sometimes our best counsel was, 'That's actually not a part of this.' Like if you, Intel, want to be known as an American jobs manufacturer and get credit in DC for the billions of dollars that you pumped into the economy, I understand that you also want to talk about STEM—but that's a separate campaign. Those two things might run in parallel, but don't try to mix them because there are different messages.

Now that I'm client side, it's even more important, because you have limited time, limited budget and limited time for your team and executives. You have to pick and choose the fewer, bigger, better. That is the lesson—fewer, bigger, better matters.

When it comes to communications, pick your shots, fund them well, resource them as appropriate and put those chips on the table. If you try to water it all down and bet everywhere, the house is gonna win. Nothing’s gonna break through.

So you’re at Burson for over eight years, and then you go to McDonald’s as an SVP of Corporate Relations. It seems like you wanted to get back to the client side and get to focus on one thing.

JC: When I think of my career, I think of it in chapters. The first big chapter was politics and government. The second big chapter was me trying to figure out what do I want to do next and finding that I genuinely like corporate consulting.

And the third big chapter, which I'm in now, is the client side. I always felt that I was missing something because if I'm advising as I once did at Microsoft, Wells Fargo, Bank of America or whatever the client may be, there was a part of me that kept thinking, 'I actually don't know what it's like to be in your shoes.' I'm giving you great counsel as I see it because there's so much exposure from working at a big agency. I see so many decks, I have so many conversations, and horizontal thinking can help you.

Once I went client side, I got it. On the agency side, you’re always coming up with big ideas, you’re putting them forward and you’re frustrated. But if you really want to know what the client side is like, all you have to do is watch “Game of Thrones.” It is much more about the politics of whatever the culture is that you’re residing in. And to be clear, I’m not saying that every corporate culture is toxic.

But brand tribalism is a real thing.

JC: Absolutely. It's understanding how you speak to the stakeholder. You might have an idea, but in order to advance anything you've got to win over everyone. I used to watch CCOs ask for a deck and be really passionate about an idea. And then it just went into the ether, just dissipated over time, and I was curious why it couldn't come to be. Now that I'm on the client side, I understand—you might have a lot of passion for an idea, but then you might have an agency develop a deck for you.

Sounds like there’s a player/coach piece to this too. I’ve always been surprised at how willing you are to roll up your sleeves and get involved in the process — frankly, that’s not something everyone in the C-suite makes the time to do. It sounds like this shift was similarly an extension of you wanting to do the work.

JC: 100%. I am now doubly happy. I genuinely enjoyed the agency side and I love that chapter of my career. But now I understand that world and I understand the corporate world. And I feel like it's allowed me to unlock the power of both. I can turn to an agency partner and say, 'This is the deck I need, this is what I want and internally, I'm now going to unlock all these doors to the best of my ability.' So what I envision and work with an agency on I can actually make a reality.

I think everyone over the course of their career has worked with someone who has skipped a step. They somehow get to a position of power, but they don’t actually understand or appreciate how it all works. And that’s very painful.

My father was in the military, and I still remember him saying that if you think about a general, they can be lazy or hardworking. They can be smart or dumb. But the best general is the one who’s hardworking and smart. Now you might think that the worst one is lazy and dumb—but it’s actually someone who’s hard-working and dumb, because they’re causing a lot of chaos. They’re working hard, but not really connected or understanding the waters they’re in.

This column is about ‘seizing the C-seat’, so I’d be remiss to not ask what lesson resonates loudest since you became CCO at General Mills after your time at McDonald’s.

JC: There's so much that communicators can imagine, because we're creative people by nature. The problem is that if you're creative, if you can imagine something that is awesome and wonderful, you think it can provide value.

But if what you're imagining is somehow divorced from the business, and you haven't got that buy-in from the C-suite? You can come up with a campaign that wins a Lion at Cannes and your business leaders would say they don't care. Or even worse, 'How much did you spend on that?' Creativity has to be in service of the business needs.

Justin Joffe and Jano Cabrera will continue the conversation during our webcast with The Conference Board, "What Initiatives Are Communications & Marketing Taking for the Elections?" on June 25th.

The post General Mills CCO Jano Cabrera on adapting strategy to the business landscape appeared first on PR Daily.

]]>
https://www.prdaily.com/general-mills-cco-jano-cabrera-on-adapting-strategy-to-the-business-landscape/feed/ 0
Partnership is allyship: NYC Pride’s Sandra Perez on meaningful Pride integrations https://www.prdaily.com/partnership-is-allyship-nyc-prides-sandra-perez-on-meaningful-pride-integrations/ https://www.prdaily.com/partnership-is-allyship-nyc-prides-sandra-perez-on-meaningful-pride-integrations/#respond Mon, 17 Jun 2024 10:00:29 +0000 https://www.prdaily.com/?p=343379 Perez keynoted The PR Museum’s “Pride, Prejudice and Politics” virtual event. This year’s Pride celebrations are proving to be high-stakes and serious in tone during a contentious election season, and at a time when The Wall Street Journal reports an increase in activist, ‘anti-woke’ shareholders scrutinizing inclusion programs and donations to LGBTQ+ groups. 2024 also […]

The post Partnership is allyship: NYC Pride’s Sandra Perez on meaningful Pride integrations appeared first on PR Daily.

]]>
Perez keynoted The PR Museum’s “Pride, Prejudice and Politics” virtual event.

This year’s Pride celebrations are proving to be high-stakes and serious in tone during a contentious election season, and at a time when The Wall Street Journal reports an increase in activist, ‘anti-woke’ shareholders scrutinizing inclusion programs and donations to LGBTQ+ groups.

2024 also marks the 40th year that NYC Pride has served as the official organizer of the city's Pride celebrations, following the dissolution of the Christopher Street Liberation organization.

During the PR Museum’s “Pride, Prejudice and Politics” event last week, NYC Pride Executive Director Sandra Perez explained how intentional language has framed Pride’s positioning from the outset.

“We call it a march, not a parade, because we are still very much grounded in the fact that NYC Pride emerged as a consequence of the Stonewall uprising,” Perez began.

“We have deep roots in advocacy and free speech. And for us, internally, we will call it a parade the day we all have the same rights.”

During the rest of her opening keynote, Perez unpacked the state of Pride organizations across the country with a focus on how corporate partners can contribute in a productive, mindful way.

Corporate partnerships support volunteer efforts

Perez acknowledged that New Yorkers are spoiled because they have at least five to six Pride organizations across different boroughs and communities — an abundance that causes some confusion about who they are and what they do.

"The majority of pride organizations are volunteer-led and have very small staff, if they have paid staff," explained Perez, noting that NYC Pride did not have full-time staff until 2016.

“That spirit runs very strong throughout our organization and most pride organizations. So when we are dealing with corporate partners, let’s say that they are surprised — usually at our size and the fact that we somehow manage to do what we do.”

Perez remembered the 50th anniversary of The Stonewall Riots in 2019, recalling the strong corporate presence that has since dwindled.

“We had this very highly visible moment that was captured nationwide in New York,” said Perez, “and I think it was fixed in people’s minds that Pride is about corporations, about large-scale big extravaganzas. That is probably the exception rather than the rule.”

Reckoning with reluctant partners

The Stonewall anniversary happened just before a global pandemic changed the world, and the rise of the Black Lives Matter movement forced the LGBTQ+ community to address a perceived lack of diversity in some circles.

“Black Lives Matter forced us to reckon with whether or not we have been as inclusive as we should be,” Perez said.

“We’ve come into this from very changed landscapes, and we’ve seen an erosion of some corporate support. We also struggle on a baseline level to secure funding from those who would really benefit from our labor. Pride is not only a movement, it is also an economic driver.”

While Perez urged those in attendance to stay the course, she also acknowledged that “some corporate partners are receding, while other long-term partners have found themselves in challenging situations regarding the public’s perception of how they support our community.”

Many partners are examining their long-standing programs, while other funders who are still happy to give money don't want it publicly acknowledged.

Recognizing ERGs is an intersectional opportunity

This tension weighs on the Pride movement, made up of a community that understands why visibility matters and knows that corporate partners can make a positive impact. For them, this work must also start with engaging allies and advocates internally.

“Some of the tension we’re seeing with our corporate partners is really about lifting up and continuing to support their employee resource groups (ERGs), but at the same time, not wanting to publicly be identified with supporting these sort of endeavors,” Perez said.

Most of the companies that NYC Pride works with have strong ERG programs that represent a litany of identities and how they co-exist.

“So they are very deeply committed to exploring not only the LGBTQ community, but are very much vested in also exploring the intersections at which we exist. I am not just a lesbian woman — I am also a Latina, I am a parent, I am a woman of a certain age, and all of those things factor into how I live and work.”

Through this lens, Perez finds that some of the best corporate partners understand the intersectionality that everyone brings to this work and how it can benefit them.

“The ones that are receding are the ones who do not have a point of view,” she said. “And quite frankly, are not interested in doing anything beyond the month of June.”

Partnership is allyship

Perez encouraged anyone who considers themselves an ally to ask questions about their ERGs internally.

“At Pride, we have really pushed our partners, and when I say partners, that’s what we want,” she said. “We get sponsors, and that’s fine. It’s a transaction, it’s good, we’ll take the money. But the real value is in partnerships.”

Mindful partners are allies because they will work with Pride organizations and other advocacy groups throughout the year, not just in June.

“We have kids who are in crisis, who need to see us visible all year long. We can’t disappear,” Perez said, adding that this is where communicators play a key role.

Making the most of the spotlight in June often means talking about volunteerism and finding out what your employee volunteer program can do to lock into advocacy work year-round.

“You can connect around that so it’s about finding that common language,” continued Perez.

“Because we are siloed within our corporations, or we’re just viewed as this one-month-a-year segment of the market, they miss out on the fact that we are talking about this all year long. We are working with creatives, with entrepreneurs. We’re talking to the business community about what comes next.”

A symbiotic partnership can also back up companies when they are attacked for supporting LGBTQ+ advocacy. Perez cited NYC Pride’s “Patrons of Pride” program, wherein companies agree to give NYC Pride a portion of their proceeds for June, as an example of this relationship.

After one organization that is based in a red state was attacked for donating to NYC Pride, the community rallied behind them.

“We were like, OK, let’s go to Instagram, lift up this patron and say, ‘Hey everybody, they got flack for supporting us. We need you to show them some love. This is their link. This is where you can go to support this company,’” remembered Perez.

“That’s a direct action we can take on behalf of the partners that we have.”

The post Partnership is allyship: NYC Pride’s Sandra Perez on meaningful Pride integrations appeared first on PR Daily.

]]>
https://www.prdaily.com/partnership-is-allyship-nyc-prides-sandra-perez-on-meaningful-pride-integrations/feed/ 0
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-8/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-8/#respond Thu, 13 Jun 2024 10:00:39 +0000 https://www.prdaily.com/?p=343358 Apple’s major expansion into AI dominates the headlines. This week, Apple made a huge step forward in its own AI journey – and likely toward democratization and expanding the use of AI by everyday people. California is also tired of waiting for the feds to regulate AI and is stepping up to the plate.   Read […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
Apple’s major expansion into AI dominates the headlines.

This week, Apple made a huge step forward in its own AI journey – and likely toward democratizing AI and expanding its use by everyday people. California is also tired of waiting for the feds to regulate AI and is stepping up to the plate.

Read on to find out what communicators need to be aware of this week in the chaotic, promising world of AI. 

Tools and Advancements

Apple this week tried to catch up in the great AI race of 2024 – and in the process took some of the biggest steps yet toward integrating AI into daily life.

The next iteration of the iOS operating system will be stuffed full of AI features, including:

  • Assistant Siri will get smarter, thanks to AI, able to carry out more tasks using natural language, carry over commands between tasks (for instance, “text that picture Gary emailed me to Juan”) as well as perform all the expected generative AI tasks like rewriting your emails, summarizing your notifications and more. Siri will be able to understand both your voice and typed commands.
  • When Siri doesn’t know an answer, she’ll turn to a partnership with OpenAI’s ChatGPT.

Privacy, both from OpenAI and Apple, was a major concern. OpenAI won’t train on content from Apple users, the company said, while Apple also pledges it will never store requests made of its own AI, which it’s dubbed … Apple Intelligence.

Groan.

In an interview with the Washington Post, Apple head Tim Cook was most bullish on the technology’s ability to save people time by collapsing items that used to be multiple requests into one fluid action.


But he’s also realistic about the limitations of the technology. While he said the technology has been deeply tested, he couldn’t guarantee “100 percent” that their AI wouldn’t face the same hallucinations that have plagued other models. 

The markets certainly liked what they heard from Apple. The stock price jumped 7% by end of trading on the day of the announcement, reaching a new record high of $207.15.

But someone did come to rain on Apple’s parade: Elon Musk.  

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies," Musk wrote on X.

Musk, who helped found OpenAI, has since turned against the company for allegedly abandoning its founding mission to chase profit. However, he did on Tuesday unexpectedly drop a lawsuit against OpenAI alleging just that. 

But even if Musk locks every iPhone that comes into Tesla HQ in a Faraday cage, the integration of AI into the foundation of that most ubiquitous of modern conveniences is likely to be a major jump forward for the average person. It may even change how we perceive AI, reframing it from a separate aspect of UX to an under-the-hood given. After all, it doesn't feel like we're using a novel AI technology. We're simply talking to Siri, something we've been able to do for 14 years now.

Apple is far from alone in putting AI deep into smartphones. Samsung was far ahead, in fact, rolling out a similar suite of features back in January. But Apple gets the most attention given its marketplace dominance. 

Elsewhere in Silicon Valley, Meta also wants its slice of the AI pie. The company is rolling out customer service chatbots in WhatsApp, hoping to gain revenue from businesses. This would be a boon to social media managers, but it’s unclear how much customers will love the sometimes frustrating experience. 

Meta is also facing backlash from visual artists as it uses imagery posted to Instagram to train its ravenous AI models. Some artists are now fleeing Instagram for a portfolio app known as Cara, which promises to protect their artwork. But it’s hardly a perfect solution, as many artists rely on Instagram and its massive userbase to make sales. Expect user revolt against having their work deployed as AI training fodder to continue.

And finally, consulting giant McKinsey shared its lessons from building its own in-house AI tool, Lilli. Their tips include assembling a multidisciplinary team, anchoring decisions in user needs, the importance of training and iteration, and ongoing measurement and maintenance. Learn how they made it happen, and perhaps be inspired to build your own custom tool. 

Risks and regulation

While the glut of questionable AI-generated content has caused headaches for Google Gemini and given us some pretty good laughs, it also highlights the tremendous risk that comes from publishing content without questioning and vetting it.

These heaps of sus AI content now have a name, The New York Times reports: slop.

Naming this low-quality content is but one way to normalize its detection in our day-to-day lives, especially as some domestic policy experts worry the U.S. is downplaying the existential risks this tech creates. 

That concern drove the proposal of a framework this past April, drafted by a bipartisan group of lawmakers including Senators Mitt Romney, Jack Reed, Angus King and Jerry Moran, that seeks to codify federal oversight over AI models that will guard against chemical, biological, cyber and nuclear threats. It’s unclear how this framework fits into, or diverges from, the comprehensive AI task force update shared by The White House this past spring.

Not content with the pace of advancing federal regulation, California advanced 30 new measures in May that amount to some of the toughest restrictions on AI in the nation, the New York Times reports. The measures focus on preventing AI tools from discriminating in housing and healthcare services, protecting intellectual property and preserving job stability.

“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic assembly member who chairs the State Assembly’s Privacy and Consumer Protection Committee.

This shouldn’t suggest that the feds don’t share California’s concerns, however. The FTC is currently investigating Microsoft’s partnership with AI startup Inflection as part of a larger effort to ramp up antitrust investigations and minimize the likelihood of one organization having a monopoly on enterprise AI software. 

Central to this probe is whether the partnership is actually an acquisition by another name that Microsoft failed to disclose, reports CNN. The FTC is currently finalizing details with the Justice Department on how they can jointly oversee the work of AI tech giants like Microsoft, Google, Nvidia, OpenAI and more.

According to CNN:

The agreement shows enforcers are poised for a broad crackdown on some of the most well-known players in the AI sector, said Sarah Myers West, managing director of the AI Now Institute and a former AI advisor to the FTC.

“Clearance processes like this are usually a key step before advancing an investigation,” West said. “This is a clear sign they’re moving quickly here.”

Microsoft declined to comment on the DOJ-FTC agreement but, in a statement, defended its partnership with Inflection.

“Our agreements with Inflection gave us the opportunity to recruit individuals at Inflection AI and build a team capable of accelerating Microsoft Copilot, while enabling Inflection to continue pursuing its independent business and ambition as an AI studio,” a Microsoft spokesperson said, adding that the company is “confident” it has complied with its reporting obligations.

But whether concerns are existential or logistical, it’s clear that fresh threats are coming fast.

Earlier this week, Human Rights Watch reported that photos and identifying information of Brazilian kids have been used without their consent to inform AI image tools like Stable Diffusion.

HRW warns that these photos contain personal metadata and can be used to train deepfakes.

HRW reports:

Analysis by Human Rights Watch found that LAION-5B, a data set used to train popular AI tools and built by scraping most of the internet, contains links to identifiable photos of Brazilian children. Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.

One such photo features a 2-year-old girl, her lips parted in wonder as she touches the tiny fingers of her newborn sister. The caption and information embedded in the photo reveals not only both children’s names but also the name and precise location of the hospital in Santa Catarina where the baby was born nine years ago on a winter afternoon.

While the privacy of children is paramount, discussion of deepfakes also resurfaces concern about how digitally manipulated images and voices will continue to influence global elections this year.

But the emerging discipline of ‘responsible AI’ may mitigate the spread, as it includes designing tools that can detect deepfake audio and video similar to how a spam filter works. 

PwC, a member of Ragan’s Communications Leadership Council, is working on defining boundaries around responsible AI use and developing tools that help communicators operate within those ethical frameworks. U.S. and Mexico Communications Lead Megan DiSciullo says this and similar efforts present an opportunity to train employees, inform end users and reduce risk.

“Whether it’s big sessions with thought leaders, teaching people how to prompt, curriculum on responsible AI or even just teaching people about what AI does and doesn’t do, a very important element remains the role of the human,” she told Ragan last month.

The scaling of responsible AI tools will only become more important: a new study conducted by Epoch AI found that the availability of training data for AI models is close to running out.

AP reports:

In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models – for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets.

In the longer term, there won’t be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private — such as emails or text messages — or relying on less-reliable “synthetic data” spit out by the chatbots themselves.

The depletion of existing data sources for language models and the increase in misinformation both make a strong case for having custom, proprietary GPTs that keep your data out of the slop pile. 

Ensuring your communications expertise and judgment are present during any internal AI council meetings, and that these risks are shared with leaders across your organization, will position your organization to embrace the foundations of responsible AI while validating the worth of your role and function.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-8/feed/ 0
Brown University explains end to Gaza campus encampment with empathy https://www.prdaily.com/brown-university-explains-end-to-gaza-campus-encampment-with-empathy/ https://www.prdaily.com/brown-university-explains-end-to-gaza-campus-encampment-with-empathy/#respond Fri, 03 May 2024 11:02:10 +0000 https://www.prdaily.com/?p=342939 A look at why Brown President Christina H. Paxson’s statement worked. Amid another week of campus protests and corporate activism around the Gaza war, it seemed like no organization handled its response properly. While Google CEO Sundar Pichai attempted to justify the firing of employee activists last week, police cleared an occupied building at Columbia […]

The post Brown University explains end to Gaza campus encampment with empathy appeared first on PR Daily.

]]>
A look at why Brown President Christina H. Paxson’s statement worked.

Amid another week of campus protests and corporate activism around the Gaza war, it seemed like no organization handled its response properly.

While Google CEO Sundar Pichai attempted to justify the firing of employee activists last week, police cleared an occupied building at Columbia University last night and arrested dozens of protestors. As stories of student encampments and ongoing protests continue to make headlines, communication from university leaders has been minimal. In a multifaceted conflict that engages stakeholders at an intersectional level, many leaders seem to take the route of saying less.

Then, just as it seemed that no institution offered a model response, Brown University announced on Tuesday evening that it reached an agreement with student leaders pushing for divestment. Details of the agreement were further contextualized in a letter by Brown President Christina H. Paxson.

Here’s what stuck out.

Leading with context

Paxson’s message to the Brown Community begins with a sober acknowledgment of what’s unfolding across the country.

“Many of us have watched with deep concern the tensions and divisions that have escalated across the country as colleges and universities have experienced intense confrontations at protests and encampments over the ongoing conflict in the Middle East,” she wrote before distinguishing Brown’s activism and announcing the news:

Brown has not experienced the heightened hostilities we have seen nationally, and I am writing to share that we’ll see a peaceful end to the unauthorized encampment that was set up April 24, 2024, on the College Green. After productive discussions between members of the Brown University administration and student leaders of the Brown Divest Coalition, we have reached an agreement that will end the encampment by 5 p.m. today.

In a moment when many institutional leaders are hesitant to comment at all, this acknowledgement doubles as recognition for anyone who has felt the emotional toll of the war and the protests. By contextualizing the news and distinguishing Brown’s response, Paxson positions this decision, and her message, as an example to follow from the outset.

The most radical element is transparency

After announcing the agreement with the students who represent the Brown Divest Coalition, Paxson shares a public link to the full document and explains the broad terms:

[T]he students have agreed to remove the encampment and refrain from further actions that would violate the Code of Student Conduct through the end of this academic year, including through Commencement and Reunion Weekend.

The University has agreed that a group of five students will be invited to meet with a group of five members of the Corporation of Brown University while trustees and fellows are on campus for the May Corporation meeting. The meeting responds to the students’ interest to be heard on the issue of “divestment from the Israeli occupation of Palestinian Territory,” which was a core demand of their protest action. It is important to note that this topic will not be on the Corporation’s business agenda, and there will not be a vote on divestment at the May meeting.

Between linking to the full agreement and unpacking it in plain language, Paxson’s letter further demonstrates an unusual level of transparency from leadership. Rather than impose new guidelines or rules for handling the matter, she holds up the existing Code of Student Conduct and demonstrates a focus on protecting the community’s shared celebratory events.

Her explanation of the upcoming May meeting, and how it will work, simultaneously reinforces documented expectations while acknowledging that student concerns have been heard and will be addressed — another notable example of recognition from leadership.

Paxson then explains how any member of the community can request that Brown divest its endowment from specific companies, even sharing the process of submitting a proposal to the appropriate advisory committee. “I have committed to bring the matter of divestment to the Corporation, regardless of ACURM’s recommendation,” she wrote. “I feel strongly that a vote in October, either for or against divestment, will bring clarity to an issue that is of long-standing interest to many members of our community.”

This level of personal perspective and accountability is rare from leaders, demonstrating Paxson’s commitment to acknowledging and engaging all community perspectives. She continues this in the closing paragraphs.

Closing on mission

The final sections of Paxson’s letter further bridge her personal hopes and perspectives on the encampment with Brown values:

I hope the meeting between the students and Corporation members will allow for a full and frank exchange of views. As I shared with the protesting students in my letter yesterday, the devastation and loss of life in the Middle East has prompted many to call for meaningful change, while also raising real issues about how best to accomplish this. Brown has always prided itself on resolving differences through dialog, debate and listening to each other.

I cannot condone the encampment, which was in violation of University policies. Also, I have been concerned about the escalation in inflammatory rhetoric that we have seen recently, and the increase in tensions at campuses across the country. I appreciate the sincere efforts on the part of our students to take steps to prevent further escalation.

During these challenging times, we continue to be guided by our mission of advancing knowledge and understanding in a spirit of free inquiry within a caring and compassionate community. We remain focused on four major priorities: (1) protecting the safety of our community; (2) fostering open and respectful learning environments; (3) providing care and empathy to affected members of our community; and (4) taking the strongest possible stance against any form of discrimination, harassment and racism against any race or ethnic group.

Even with this agreement, there remain many differences within our community about the Israeli-Palestinian conflict. These differences have been heightened in the months since October 7. And, I know that we will continue to have — and express — a broad range of conflicting beliefs and opinions about the situation in the Middle East, and the University’s response to it.

This stands out as a bold, empathetic example of executive comms at a time when examples seem few and far between. It’s also an acknowledgment of the fact that students’ rights to be heard and protest can co-exist within campus codes of conduct, committees for reviewing divestment, and other mechanisms put in place to protect civil, solution-oriented discourse.

Most importantly, the willingness of a leader to offer resources emphasizes Brown’s ultimate commitment to educate, inform and provide a path for progress to its community. That’s where the institution’s mission and actions align.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

The post Brown University explains end to Gaza campus encampment with empathy appeared first on PR Daily.

]]>
https://www.prdaily.com/brown-university-explains-end-to-gaza-campus-encampment-with-empathy/feed/ 0
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-7/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-7/#respond Thu, 02 May 2024 09:00:01 +0000 https://www.prdaily.com/?p=342912 New risks and regulations lead the news. This week’s update is a tug-of-war between new technological advancements that bring stunning opportunities and regulation that seeks to give shape to this radical new technology and hamper bad actors from running amok with this power. Read on to find out what communicators need to be aware of […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
New risks and regulations lead the news.

This week’s update is a tug-of-war between new technological advancements that bring stunning opportunities and regulation that seeks to give shape to this radical new technology and hamper bad actors from running amok with this power.

Read on to find out what communicators need to be aware of this week in the chaotic, promising world of AI. 

Risks

As AI grows more sophisticated and powerful, it raises new risks that communicators never had to worry about before. This issue was exemplified by a bizarre case out of Maryland where a high school athletics director used AI to make it sound as though his principal was making racist and antisemitic remarks.

After damaging the principal’s reputation, the athletics director was arrested on a variety of charges. How this case plays out is certain to have legal ramifications, but the sheer ease with which a regular person was able to clone his boss’ voice to make him look bad should give all communicators pause. Be on the lookout for these devious deepfakes, and be prepared to push back. 

But artist FKA twigs is taking a unique approach to combating deepfakes by creating her own. In written testimony submitted to the U.S. Senate, she said:

AI cannot replicate the depth of my life journey, yet those who control it hold the power to mimic the likeness of my art, to replicate it and falsely claim my identity and intellectual property. This prospect threatens to rewrite and unravel the fabric of my very existence. We must enact regulation now to safeguard our authenticity and protect against misappropriation of our inalienable rights.

FKA twigs says she intends to use her digital doppelganger to handle her social media presence and fan outreach while she focuses on her music. It’s a unique approach, and potentially one we’ll see more of in the future.

In other legal news, yet another lawsuit has been filed taking aim at what materials are used to train LLMs. 

Eight newspapers, including the Chicago Tribune and the Denver Post, are suing OpenAI and Microsoft, alleging that millions of their articles were used to train Microsoft Copilot and ChatGPT, the New York Times reported.

Specifically, the suit complains that the bots offered up content that was only available behind their paywalls, thus relieving readers of the need to subscribe to gain access to specific knowledge and content. Similarly, a group of visual artists are suing Google on accusations that their artwork was used to train Google’s visual AI models. These cases will take years to resolve, but the outcomes could shape the future of AI.

We’re also now beginning to see some consumer backlash against the use of AI tools in areas where users really don’t want them. Axios reports that Meta’s aggressive push to incorporate AI into the search bars of Facebook, Instagram and WhatsApp is leading to customer complaints. While Axios pointed out that this is historically the pattern of new feature launches on Meta apps – initial complaints followed by an embrace of the tool – AI fatigue is a trend to watch.

That fatigue could also be playing out amid the second global AI summit, hosted by both Great Britain and South Korea, though it will largely play out virtually. Reuters reports that the summit is seeing less interest and lower projected attendance. 

Is the hype bubble bursting? 

Regulation

The White House announced a series of key AI regulatory actions, building on President Biden’s executive order from November with a detailed list of interdepartmental commitments and initiatives. 

While the initial executive order lacked concrete timelines and specifics on how the ambitious tasks would be fulfilled, this recent announcement begins by mapping its updates and explaining how progress was tethered to specific timeframes:

Today, federal agencies reported that they completed all of the 180-day actions in the E.O. on schedule, following their recent successes completing each 90-day, 120-day, and 150-day action on time. Agencies also progressed on other work tasked by the E.O. over longer timeframes.

Updates include:

  • Managing risks to safety and security. This effort directed agencies to acknowledge the safety and security risks of AI around infrastructure, biological warfare and software vulnerabilities. It included the development of a framework to prevent the possibility of using AI to engineer bioweapons, documents on generative AI risks that are available for public comment, safety and security guidelines for operators of critical infrastructure, the launch of a safety and security board to advise the secretary of Homeland Security, and the Department of Defense’s piloting of new AI tools to test for vulnerabilities in government software systems.
  • AI’s energy impact. Dubbed “Harnessing AI for good” in a delicate dance against accusations of “wokeness,” this portion of the update also shared details of how the government plans to advance AI for scientific research and collaborate more with the private sector. These include announced funding opportunities led by the Department of Energy to support the development of energy-efficient algorithms and hardware. Meetings are on the books with clean energy developers, data center owners and operators, alongside local regulators, to determine how AI infrastructure can scale with clean energy in mind. There’s also an analysis in the works of the risks AI will pose to our nation’s power grid.

The update also featured progress on how the Biden administration is bringing AI talent into the federal government, which we’ll explore in the “AI at work” section below.

Overall, this update doubles as an example of how communicators can marry progress to a timeline to foster strategic, cross-departmental accountability. Those working in the software and energy sectors should also pay close attention to the commitments outlined above, and evaluate whether it makes sense for their organization to get involved in the private sector partnerships.

On the heels of this update, the Department of Commerce’s National Institute of Standards and Technology released four draft publications aiming to improve the safety, security and trustworthiness of AI systems. These include an effort to develop advanced methods for determining what content is produced by humans and what is produced by AI.

“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said U.S. Secretary of Commerce Gina Raimondo. “The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time. With these resources and the previous work on AI from the department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

While this progress on federal regulations should not be understated, TIME reported OpenSecrets data which reveals that 451 groups lobbied the federal government on artificial intelligence in 2023, nearly triple the 158 lobbying groups in 2022.

“And while these companies have publicly been supportive of AI regulation, in closed-door conversations with officials they tend to push for light-touch and voluntary rules, say Congressional staffers and advocates,” writes TIME. 

Whatever the intentions of these lobbyists are, it’ll be interesting to watch how their efforts fit in with the government’s initiatives and commitments. Public affairs leads should be mindful of how their efforts can be framed as a partnership with the government, which is offering ample touchpoints to engage with the private sector, or perceived as a challenge to national security under the guise of “innovation.” 

AI at work

The White House’s 180-day update also includes details about how the government will prepare the workforce to accelerate its AI applications and integrations. This includes a requirement of all government agencies to apply “developed bedrock principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers.”

In this spirit, the Department of Labor published a guide for federal contractors to answer questions about legal obligations and equal employment opportunities. Whether your organization works with the government or not, this guide is a model to follow for any partner AI guidelines you may be asked to create. 

Other resources include guidance on how AI can violate employment discrimination laws, guidance on nondiscriminatory AI use in the housing sector and when administering public benefit programs. 

These updates include frameworks for testing AI in the healthcare sector. Healthcare communicators should pay particular attention to a rule “clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics, and other tools. Specifically, the rule applies the nondiscrimination principles under Section 1557 of the Affordable Care Act to the use of patient care decision support tools in clinical care, and it requires those covered by the rule to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.”

Beyond that, the White House also provided updates on its “AI Talent Surge” program.

“Since President Biden signed the E.O., federal agencies have hired over 150 AI and AI-enabling professionals and, along with the tech talent programs, are on track to hire hundreds by Summer 2024,” the release reads. “Individuals hired thus far are already working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.”

Meanwhile in the private sector, Apple’s innovation plans are moving fast with The Financial Times reporting that the tech giant has poached dozens of Google’s AI experts to work at a secret lab in Zurich. 

All of this fast-moving behavior calls for a reminder that sometimes it’s best to slow down, especially as Wired reports that recruiters are overloaded with applications due to the flood of genAI tools making it easier for candidates to send applications en masse and harder for recruiters to sift through them all.

“To a job seeker and a recruiter, the AI is a little bit of a black box,” says Hilke Schellmann, whose book The Algorithm looks at software that automates résumé screening and human resources. “What exactly are the criteria of why people are suggested to a recruiter? We don’t know.”

As more recruiters go manual, it’s worth considering how your HR and people leaders evaluate candidates, balancing efficiencies in workflow with the human touch that can help identify a qualified candidate the algorithm may not catch. 

Ultimately, the boundaries for responsible AI adoption at work will best be defined by those doing the work, not leadership, argues Verizon Consumer SVP and CEO Sowmyanarayan Sampath in HBR:

In developing applied technologies like AI, leaders must identify opportunities within workflows. In other words, to find a use for a new piece of tech, you need to understand how stuff gets done. Czars rarely figure that out, because they are sitting too far away from the supply line of information where the work happens.

There’s a better way: instead of decisions coming down the chain from above, leaders should let innovation happen on the frontline and support it with a center of excellence that supplies platforms, data engineering, and governance. Instead of hand-picking an expert leader, companies should give teams ownership of the process. Importantly, this structure lets you bring operational expertise to bear in applying technology to your business, responsibly and at scale and speed.

We couldn’t agree more.

Tools

For those who are interested in developing an AI tool but aren’t sure where to begin, Amazon Q might be the answer. The app will allow people to use natural language to build apps, no coding knowledge required. This could be a game-changer in democratizing AI creation. Prices start at $20 per month.

From an end-user perspective, Yelp says its new Assistant AI tool will use natural language searches to help users find exactly what they’re looking for and then even draft messages to businesses. Yelp says this will help customers better communicate their needs – a move that could save time for both customers and businesses.

ChatGPT is widely rolling out a new feature that will allow chatbots to get to know you more deeply. Dubbed Memory, the ChatGPT Plus feature enables bots to remember more details about past conversations and to learn based on your interactions. This could cut down on time spent giving ChatGPT instructions about your life and preferences, but it could also come across as a bit invasive and creepy. ChatGPT does offer the ability to have the AI forget details, but expect more of this customization to come out in the future. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-7/feed/ 0
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-8/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-8/#respond Thu, 04 Apr 2024 09:00:25 +0000 https://www.prdaily.com/?p=342596 From regulation to new tools and beyond. AI continues to shape our world in ways big and small. From new rulings and protections for artists to tools that will help communicators and also aid bad actors, there’s no shortage of big stories. Here’s what communicators need to know.  AI risks and regulation In no surprise […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
From regulation to new tools and beyond.


AI continues to shape our world in ways big and small. From new rulings and protections for artists to tools that will help communicators and also aid bad actors, there’s no shortage of big stories.

Here’s what communicators need to know. 

AI risks and regulation

In no surprise to anyone following our updates, AI’s evolution is leading to more regulation to keep the fast-moving tech in check. 

Earlier this week, the estate of late comedian George Carlin settled with podcasters Will Sasso and Chad Kultgen over their comedy special, “George Carlin: I’m Glad I’m Dead,” which was made by training an AI algorithm on five decades of Carlin’s work and posted on YouTube.

In addition to allegations of copyright infringement, the suit also claimed that the comedians used Carlin’s name and likeness without permission. 

The New York Times reports:

“The world has begun to appreciate the power and potential dangers inherent in A.I. tools, which can mimic voices, generate fake photographs and alter video,” [lawyer for Carlin’s estate Josh] Schiller said in a statement on Tuesday.

He added: “This is not a problem that will go away by itself. It must be confronted with swift, forceful action in the courts, and the A.I. software companies whose technology is being weaponized must also bear some measure of accountability.”

A spokeswoman for Mr. Sasso declined to comment. A spokesman for Mr. Kultgen could not immediately be reached.

Kelly Carlin, George Carlin’s daughter, wrote in a statement that she was pleased that the suit had been resolved so quickly.

“While it is a shame that this happened at all, I hope this case serves as a warning about the dangers posed by A.I. technologies and the need for appropriate safeguards,” Ms. Carlin said.

The 200 musicians who signed an open letter organized by the non-profit Artist Rights Alliance would also agree with Carlin.

The letter, which includes signatures from the likes of Katy Perry, J Balvin, Billie Eilish and Jon Bon Jovi, urges “AI developers, technology companies, platforms and digital music services to cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists.” While it acknowledges AI’s “enormous potential to advance human creativity,” it also claims that many companies use artists’ work irresponsibly to train models that dilute the artists’ royalty pools.

“Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter continues.

Thankfully, the musicians performing on TV or movie scores and those onscreen are winning some regulatory protections. The American Federation of Musicians voted to ratify its new contract with major studios this week, providing streaming residuals and AI protections that codify the provisions secured after the Writers Guild of America and SAG-AFTRA strikes ended last year.

According to Variety:

“This agreement is a monumental victory for musicians who have long been under-compensated for their work in the digital age,” said Tino Gagliardi, the union’s international president, in a statement.

On AI, the union got a stipulation that musicians are human beings. The agreement allows AI to be used to generate a musical performance, with payment to musicians whose work is used to prompt the AI system.

“AI will be another tool in the toolbox for the artistic vision of composers, and musicians will still be employed,” said Marc Sazer, vice president of AFM Local 47, in an interview. “They cannot produce a score without at least a human being.”

Treating AI as “another tool in the toolbox” is a great way to preserve human agency while automating certain tasks, and this agreement is a reminder that any collaborative policies you set with the creatives you work with (be they influencers, freelancers or full-timers) would do well to include context on how AI tools will or won’t be used to augment their work.

Remember: communicators who start crafting internal use guidelines and governance around AI now will be one step ahead when federal regulations are finally codified.

This week, the U.S. Department of Commerce announced a partnership between the U.S. and U.K. AI Safety Institutes that will see them share research, safety evaluations and guidance on AI safety as they agree on processes for evaluating AI models, systems and agents.

This partnership will include at least one joint testing exercise on a publicly accessible model not named in the press release. 

“This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” said U.S. Secretary of Commerce Gina Raimondo. “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

It’s unclear how this partnership connects back to past reports of global governmental collaboration on AI regulation. But the timing of the announcement — between the ratification of a landmark European AI law last month and its expected enforcement in May — is a smart accountability play by the U.S. government, which has been moving more slowly than other countries on AI regulation and will see American companies operating in EU regions held accountable to some of the bloc’s global standards.

While matters of safety and security are no doubt of high interest to your employees and external stakeholders alike, the larger push for regulation also ties back to which large language models (LLMs) are used most often – and raises questions about the dominance of the companies producing them.

Appearing with Jon Stewart on The Daily Show this past Monday, FTC Chair Lina Khan touched on her push for antitrust reform and took a shot at Apple after Stewart revealed that Apple had asked him not to have her on his podcast.


“I think it just shows the danger of what happens when you concentrate so much power and so much decision-making in a small number of companies,” Khan said.

Keep that in mind as we look at Apple’s newest AI innovation. 

Tools and use cases 

Apple’s newest venture is ambitious, set to take on nothing less than industry leader ChatGPT. Reference Resolution As Language Modeling, or ReaLM, is anticipated to power Siri and other virtual assistants. What makes it stand out, according to Business Insider, is its superior ability to interpret context. 

For example, let’s say you ask Siri to show you a list of local pharmacies. Upon being presented with the list, you might ask it to “Call the one on Rainbow Road” or “Call the bottom one.” With ReaLM, instead of getting an error message asking for more information, Siri could decipher the context needed to follow through with such a task better than GPT-4 can, according to the Apple researchers who created the system.

ReaLM also excels in understanding images with embedded text, which can make it handy for pulling phone numbers or recipes out of uploaded images, BI reports. It’s all a big move for a tech player that has been widely seen as lagging behind the major industry players. Will it be enough?

It’s clear the battle for users, especially in the business space, will be intense. Amazon is luring startups to its cloud products by offering free credits for AI tools – even competitors’ products. But even though the credits can be used for other tools, Amazon made it clear to Reuters that its end goal is to build market share for its Bedrock platform, which offers models from providers including Anthropic.

“That’s part of the ecosystem building. We are unapologetic about that,” said Howard Wright, vice president and global head of startups at Amazon Web Services. Expect more brands to offer incentives as these wars really heat up.

Reuters also reports that Yahoo has made its own big investment in AI-powered news through its purchase of Artifact, created by the co-founders of Instagram. The news recommendation platform will help Yahoo serve more personalized content to visitors to its websites, a trend we’re certain to see more of in the future. AI’s capability to determine exactly what humans want and deliver it seamlessly represents a new era for content marketing.

Another major update for anyone who works with content is a new feature in OpenAI’s DALL-E image generator that allows users to edit pictures with conversational prompts. ZDNet reports that images can be edited either by using a tool to identify the areas that should be altered and then typing a prompt, or simply by writing prompts, such as “make it black and white.” For those in the comms space who lack graphic design skills, this could be a major leap forward. But of course, it also comes with risks, like all AI tools at the moment.

In a move that is all but certain to deepen problems with deepfakes even as it presents unique new opportunities for good actors, OpenAI has announced a tool that CNN says can recreate human voices with “startling accuracy.” Voice Engine needs just a 15-second recording of a person’s voice to convincingly mimic it. OpenAI says the tool can help with translation, reading assistance or speaking assistance for people who cannot talk – but it also recognizes the potential for misuse. 

“Any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify that the original speaker is knowingly adding their voice to the service and a no-go voice list that detects and prevents the creation of voices that are too similar to prominent figures,” OpenAI said in a statement shared with CNN. 

But others aren’t waiting for organizations like OpenAI to solve the misinformation problem – they’re acting now. 

University of Washington professor Oren Etzioni, founding chief executive of the Allen Institute for AI, spearheads an organization called TrueMedia.org, which has released a free suite of tools for journalists, fact-checkers and others (like you!) who are trying to parse truth from fiction amid the explosion of AI. The tools offer confidence assessments of how likely an image or video is to have been created by AI. It’s a helpful resource, but even Etzioni warns of its limitations.

“We are trying to give people the best technical assessment of what is in front of them,” Etzioni said. “They still need to decide if it is real.”

AI at work

One of the most common talking points for companies looking to invest in AI is its potential to streamline efficiency and productivity for mundane tasks. But several economic experts aren’t so sure, according to the New York Times:

But many economists and officials seem dubious that A.I. — especially generative A.I., which is still in its infancy — has spread enough to show up in productivity data already.

Jerome H. Powell, the Federal Reserve chair, recently suggested that A.I. “may” have the potential to increase productivity growth, “but probably not in the short run.” John C. Williams, president of the New York Fed, has made similar remarks, specifically citing the work of the Northwestern University economist Robert Gordon.

Mr. Gordon has argued that new technologies in recent years, while important, have probably not been transformative enough to give a lasting lift to productivity growth.

“The enthusiasm about large language models and ChatGPT has gone a bit overboard,” he said in an interview.

Of course, that’s not stopping large organizations from exploring productivity gains that the tech can bring. The story goes on to share details of how Walmart, Macy’s, Wendy’s and other brands are using AI internally across comms, marketing and logistics functions.

The piece notes that Walmart’s “My Assistant” section of its employee app uses AI to answer questions about benefits, summarize meetings and draft job descriptions:

The retailer has been clear that the tool is meant to boost productivity. In an interview last year, Donna Morris, Walmart’s chief people officer, said one of the goals was to eliminate some mundane work so employees could focus on tasks that had more impact. It’s expected to be a “huge productivity lift” for the company, she said.

This positioning of the tech as a means to eliminate mundane work tracks with how AI is often positioned as a partner, not a replacement — but this won’t be the case for employees working in jobs that involve physical labor. 

Re-Up, an AI-powered convenience store chain, announced its integration of Nala Robotics’ autonomous fry-cooking station, dubbed “The Wingman,” at several of its locations.

“The integration of robotics kitchens stands as a pivotal strategy in our modernization initiative, enabling us to enhance operational efficiency and deliver seamless services while upholding unwavering quality standards around the clock,” Narendra Manney, co-founder and president of Re-Up, said in the press release.

“The Wingman doesn’t get sick, can work around the clock and can cook any dish efficiently all the time, improving on quality and saving on labor costs,” said Ajay Sunkara, CEO of Nala Robotics. “At the same time, customers get to choose from an assortment of great-tasting food items just the way they like it.”

When communicating with employees, consider that they’re seeing these stories – and they’re worried. Work with senior leaders on their language so they don’t frame automation purely as a business efficiency without acknowledging the people behind the labor savings. Fries may be just as tasty cooked by a robot, but the fear such developments can instill in employees doing more menial tasks is still something to get out in front of.

No wonder the Wall Street Journal reports that top M.B.A. programs at American University and Wharton are training students to think about how AI will automate tasks in their future careers:

American’s new AI classwork will include text mining, predictive analytics and using ChatGPT to prepare for negotiations, whether navigating workplace conflict or advocating for a promotion. New courses include one on AI in human-resource management and a new business and entertainment class focused on AI, a core issue of last year’s Hollywood writers strike. 

Officials and faculty at Columbia Business School and Duke University’s Fuqua School of Business say fluency in AI will be key to graduates’ success in the corporate world, allowing them to climb the ranks of management. Forty percent of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022. 

This integration of AI education at some of the country’s top business schools should serve as a call to communicators to explore the learning and development opportunities for employees who would benefit from upskilling on AI as part of their career trajectory.

Ultimately, this trend is another reminder that your work goes beyond crafting use cases, guidelines and best practices. Partnering with HR and people managers to see what AI training is available for the top talent in your industry, then positioning that training as a core tenet of your employer brand, will ensure your organization remains competitive and primed for the future. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-8/feed/ 0
Live from Ragan’s Social Media Conference: Optimizing your short-form video content https://www.prdaily.com/optimizing-short-form-video-content/ https://www.prdaily.com/optimizing-short-form-video-content/#respond Thu, 28 Mar 2024 11:01:04 +0000 https://www.prdaily.com/?p=342525 From establishing an ownable identity to finding trends to participate in, tips from Ragan’s Social Media Conference. Ragan and PR Daily’s 2024 Social Media Conference kicked off Wednesday with a memorable pre-conference workshop slate. During an afternoon session, Mackenzie Perna, co-founder at Sun & Sol Co., and Tyler Paget, social media director at Fox Racing, […]

The post Live from Ragan’s Social Media Conference: Optimizing your short-form video content appeared first on PR Daily.

]]>
From establishing an ownable identity to finding trends to participate in, tips from Ragan’s Social Media Conference.


Ragan and PR Daily’s 2024 Social Media Conference kicked off Wednesday with a memorable pre-conference workshop slate. During an afternoon session, Mackenzie Perna, co-founder at Sun & Sol Co., and Tyler Paget, social media director at Fox Racing, did a deep dive into how organizations can create impactful short-form content on TikTok, Reels and Shorts.

Here’s what we learned.

Establishing an ownable identity

Perna began by sharing how brands can create an ownable identity by embracing truth in various ways.

“We recommend that brands take on a very editorial approach and show their human community truth,” she said. “If we look around, we’re all diverse in ages and ethnicities, in what we like to do on the weekends. And consumers are looking at some of this from their brands as well. So when you’re programming your content showing diversity in your audience, and what you’re providing in regards to the entertainment and content, you’re sharing consumer truth. We all want to trust the brands that we’re purchasing from as well as the content we’re investing our time into.”

“The days of speaking through a one-way megaphone are over,” Perna added. “We really want to dig in.”

Lead with the hook

It’s important to remember that the hook is everything on TikTok — and action should come first. “Jump into a scene — don’t set it,” said Paget, emphasizing that this will encourage your audience to join you in the scene – and stay with you.

Some examples of effective hooks include:

  • “I don’t know who needs to hear this, but…”
  • “This is for you if [describe your target audience’s needs]”
  • “Stop scrolling if you [describe your target audience] who [desire or dislike]”
  • “Here’s 5 things to [desire]”
  • “You need to do this if you [desire]”
  • “Here’s a hack to [banish/attract]”
  • “Did you know that…”
  • “5 mistakes you are probably making…”

Optimizing your content

Creating a great video is only the beginning. When you’re putting the finishing touches on a piece of content, make sure you’re also considering:

  • Showcasing premium content by staying in safe zones. “There’s a lot of stuff going on on your screen, a lot of busyness with engagement icons … making sure your content is within that safe zone is very pertinent,” Paget said. Checking preview mode can help here.
  • Leveraging audio and voiceover to create an immersive experience. Remember that TikTok started its life as Musical.ly, and music is still rooted in its DNA. Leveraging royalty-free, in-app music is the best way to go here.
  • Optimizing keywords. These should be optimized in your profile, text on screen, caption and copy. “Social media has really become a search engine,” said Perna.
  • Alt-text. Right before hitting post, you can find alt-text in your settings and add a description for what the content you’re sharing looks and feels like. This helps people with disabilities access and enjoy your content.

Telling a story and being a human

A 2024 Sprout Social report found that most customers want to see more humans show up on brand social accounts. Front-line employees, social media teams, community employees and corporate leaders were the most desired.

When spotlighting these employees, it’s crucial to not overthink it or try to be flawless. Accept that the aesthetics won’t be perfect and don’t stress having a strict storyline. Shakiness will happen, as will spelling mistakes and background noise. “If anything, it’ll cause chaos in the comment section, which is great for engagement,” Paget said with a smirk.

When to step in and when to stay out of trends

Just because a trend is happening on TikTok doesn’t mean you should be jumping in. Finding trends requires being an active participant. “Whether you’re on your lunch break or you’re having your morning coffee, just scroll through your brand’s account,” Perna said. “See what your community’s talking about. See what sounds and formats and conversations are happening in real time and allow this to be applied to your strategy on a daily basis. This allows your team to be quick and reactive.”

When determining whether to jump into a trend, ask yourself:

  • Is it authentic? 90% of consumers say that authenticity matters when choosing brands to support, from retail to healthcare to higher ed and beyond.
  • Is it relevant? Citing Gary Vaynerchuk, Paget reminded us that good content is not about selling your products or services — it’s about understanding culture to decide when you can enter relevant conversations taking place online.
  • Is it memorable? Visual appeal, evoking emotions and leaning on the tenets of storytelling will help you get there.

“If you answer no to any of these questions, consider sitting out of that trend,” Perna said.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. He oversees the editorial strategy for Ragan across brands and products.

The post Live from Ragan’s Social Media Conference: Optimizing your short-form video content appeared first on PR Daily.

]]>
https://www.prdaily.com/optimizing-short-form-video-content/feed/ 0
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-7/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-7/#respond Thu, 14 Mar 2024 09:00:55 +0000 https://www.prdaily.com/?p=342341 From risks to regulation, what you need to know this week.  AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories. Here’s what communicators need to know.  AI risks and regulation As […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
From risks to regulation, what you need to know this week. 


AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and big changes in how newsrooms use AI, there’s no shortage of big stories.

Here’s what communicators need to know. 


AI risks and regulation

As always, new and recurring risks continue to emerge around the implementation of AI. Hence, the push for global regulation continues.

Consumers overwhelmingly support federal AI regulation, too, according to a new survey from HarrisX. “Strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such,” reads the exclusive feature in Variety.

But is the U.S. government best equipped to lead on regulation? On Wednesday, the European Parliament approved a landmark law that its announcement claims “ensures safety and compliance with fundamental rights, while boosting innovation.” It is expected to take effect this May.

The law includes new rules banning applications that threaten citizens’ rights, such as biometric systems that collect sensitive data to create facial recognition databases (with some exceptions for law enforcement). It also imposes clear obligations on high-risk AI systems, including those used in “critical infrastructure, education and vocational training, employment, essential private and public services, certain systems in law enforcement, migration and border management” and “justice and democratic processes,” according to the EU Parliament.

The law will also require general-purpose AI systems, and the models they are based on, to meet transparency requirements in compliance with EU copyright law, including publishing detailed summaries of the content used for training. Manipulated images, audio and video will need to be labeled.

CNBC reports:

Dragos Tudorache, a lawmaker who oversaw EU negotiations on the agreement, hailed the deal, but noted the biggest hurdle remains implementation.

“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” Tudorache said on social media on Tuesday.

“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” he added. 

Legal professionals described the act as a major milestone for international artificial intelligence regulation, noting it could pave the path for other countries to follow suit.

Last week, the bloc brought into force landmark competition legislation set to rein in U.S. giants. Under the Digital Markets Act, the EU can crack down on anti-competitive practices from major tech companies and force them to open out their services in sectors where their dominant position has stifled smaller players and choked freedom of choice for users. Six firms — U.S. titans Alphabet, Amazon, Apple, Meta, Microsoft and China’s ByteDance — have been put on notice as so-called gatekeepers.

Communicators should pay close attention to U.S. compliance with the law in the coming months, as diplomats reportedly worked behind the scenes to water down the legislation.

“European Union negotiators fear giving in to U.S. demands would fundamentally weaken the initiative,” reported Politico.

“For the treaty to have an effect worldwide, countries ‘have to accept that other countries have different standards and we have to agree on a common shared baseline — not just European but global,’” said Thomas Schneider, the Swiss chairman of the committee.

If this global regulation dance sounds familiar, that’s because something similar happened when the EU adopted the General Data Protection Regulation (GDPR) in 2016, an unprecedented consumer privacy law that required cooperation from any company operating in a European market. That law influenced the creation of the California Consumer Privacy Act two years later. 

As we saw last week when the SEC approved new rules for emissions reporting, the U.S. can water down regulations below a global standard. It doesn’t mean, however, that communicators with global stakeholders aren’t beholden to global laws.

Expect more developments on this landmark regulation in the coming weeks.

As news of regulation dominates, we are reminded that risk still abounds. While AI chip manufacturer NVIDIA rides all-time market highs and earned coverage for its competitive employer brand, the company also finds itself in the crosshairs of a proposed class action copyright infringement lawsuit just like OpenAI did nearly a year ago. 

Authors Brian Keene, Abdi Nazemian and Stewart O’Nan allege that their works were part of a dataset NVIDIA used to train its NeMo AI platform.

QZ reports:

Part of the collection of works NeMo was trained on included a dataset of books from Bibliotik, a so-called “shadow library” that hosts and distributes unlicensed copyrighted material. That dataset was available until October 2023, when it was listed as defunct and “no longer accessible due to reported copyright infringement.”

The authors claim that the takedown is essentially Nvidia’s concession that it trained its NeMo models on the dataset, thereby infringing on their copyrights. They are seeking unspecified damages for people in the U.S. whose copyrighted works have been used to train NeMo’s large language models within the past three years.

“We respect the rights of all content creators and believe we created NeMo in full compliance with copyright law,” a Nvidia spokesperson said.

While this lawsuit is a timely reminder that course corrections can be framed as an admission of guilt in the larger public narrative, the stakes are even higher.

A new report from Gladstone AI, commissioned by the State Department and informed by experts at several AI labs including OpenAI, Google DeepMind and Meta, offers substantial recommendations for addressing the national security risks posed by the technology. Chief among its concerns is what it characterizes as a “lax approach to safety” in the interest of not slowing progress, along with cybersecurity vulnerabilities and more.

Time reports:

The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer, it recommends.

On the ground level, Microsoft stepped in to block terms that generated violent, sexual imagery in Copilot after an engineer expressed concerns to the FTC.

According to CNBC:

Prompts such as “pro choice,” “pro choce” [sic] and “four twenty,” which were each mentioned in CNBC’s investigation Wednesday, are now blocked, as well as the term “pro life.” There is also a warning about multiple policy violations leading to suspension from the tool, which CNBC had not encountered before Friday.

“This prompt has been blocked,” the Copilot warning alert states. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.”

This development is a reminder that AI platforms will increasingly put the onus on end users to follow evolving guidelines when publishing automated content. Whether you work within the capabilities of consumer-optimized GenAI tools or run your own custom GPT, sweeping regulation of the AI industry is not a question of “if” but “when.”

Tools and use cases 

Walmart is seeking to cash in on the AI craze with pretty decent results, CNBC reports. Its current experiments surround becoming a one-stop destination for event planning. Rather than going to Walmart.com and typing in “paper cups,” “paper plates,” “fruit platter” and so on, the AI will generate a full list based on your needs – and of course, allow you to purchase it from Walmart. Some experts say this could be a threat to Google’s dominance, while others won’t go quite that far, but are still optimistic about its potential. Either way, it’s something for other retailers to watch.

Apple has been lagging behind other major tech players in the AI space. Its biggest current project is a laptop that touts its power for running other companies’ AI applications, rather than AI of its own. But FastCompany says that could change this summer when Apple rolls out its next operating systems, which are all but certain to include their own AI.

FastCompany speculates that a project internally dubbed “AppleGPT” could revolutionize how voice assistant Siri works. It also may include an AI that lives on your device rather than in the cloud, which would be a major departure from other services. They’ll certainly make a splash if they can pull it off.

Meanwhile, Google’s Gemini rollout has been anything but smooth. Recently the company restricted queries related to upcoming global elections, The Guardian reported.

A statement from Google’s India team reads: “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.” The Guardian says that even basic questions like “Who is Donald Trump?” or asking about when to vote give answers that point users back to Google searches. It’s another black eye for the Gemini rollout, which consistently mishandles controversial questions or simply sends people back to familiar, safe technology.

But then, venturing into the unknown has big risks. Nature reports that AI is already being used in a variety of research applications, including generating images to illustrate scientific papers. The problems arise when close oversight isn’t applied, as in the case of a truly bizarre image of rat genitalia with garbled, nonsense text overlaid on it. Worst of all, this was peer reviewed and published. It’s yet another reminder that these tools cannot be trusted on their own. They need close oversight to avoid big embarrassment. 

AI is also threatening another field, completely divorced from scientific research: YouTube creators. Business Insider notes that there is an exodus of YouTubers from the platform this year. Their reasons are varied: Some face backlash, some are seeing declining views and others are focusing on other areas, like stand-up comedy. But Business Insider says that AI-generated content swamping the video platform is at least partly to blame:


Experts believe if the trend continues, it may usher in a future where relatable and authentic friends people used to turn to the platform to watch are fewer and far between. Instead, replaced by a mixture of exceedingly high-end videos only the MrBeasts of the internet can reach and sub-par AI junk thrown together by bots and designed to meet our consumption habits with the least effort possible.

That sounds like a bleak future indeed – and one that could also shrink the pool of influencers available to partner with on the platform.

But we are beginning to see some backlash against AI use, especially in creative fields. At SXSW, two filmmakers behind “Everything Everywhere All at Once” decried the technology. Daniel Scheinert warned against AI, saying: “And if someone tells you, there’s no side effect. (AI’s) totally great, ‘get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff.”

Thinking carefully about responsible AI use is something we can all get behind. 

AI at work

As the aforementioned tools promise new innovations that will shape the future of work, businesses continue to adjust their strategies in kind.

Thomson Reuters CEO Steve Hasker told the Financial Times that the company has “tremendous financial firepower” to expand the business into AI-driven professional services and information ahead of selling the remainder of its holding in the London Stock Exchange Group (LSEG).

“We have dry powder of around $8 billion as a result of the cash-generative ability of our existing business, a very lightly levered balance sheet and the sell down of [our stake in] LSEG,” said Hasker. 

Thomson Reuters has been on a two-year reorganization journey to shift from a content provider into a “content-driven” tech company. It’s a timely reminder that now is the time to consider how AI fits not only into your internal use cases, but into your business model. Testing tech and custom GPTs internally as “customer zero” can train your workforce and prepare a potentially exciting new product for market in one fell swoop.

A recent WSJ feature goes into the cost-saving implications of using GenAI to integrate new corporate software systems, highlighting concerns that the contractors hired to implement these systems will see bottom-line savings through automation while charging companies the same rate. 

WSJ reports:

How generative AI efficiencies will affect pricing will continue to be hotly debated, said Bret Greenstein, data and AI leader at consulting firm PricewaterhouseCoopers. It could increase the cost, since projects done with AI are higher quality and faster to deliver. Or it could lead to lower costs as AI-enabled integrators compete to offer customers a better price.

Jim Fowler, chief technology officer at insurance and financial services company Nationwide, said the company is leaning on its own developers, who are now using GitHub Copilot, for more specialized tasks. The company’s contractor count is down 20% since mid-2023, in part because its own developers can now be more productive. Fowler said he is also finding that contractors are now more willing to negotiate on price.

Remember, profits and productivity are not necessarily one and the same. Fresh Axios research found workers in Western countries are embracing AI’s potential for productivity less than others – only 17% of U.S. respondents and 20% of EU respondents said that AI improved productivity. That’s a huge gap from the countries reporting higher productivity, including 67% of Indian respondents, 65% in Indonesia and 62% in the UAE.

Keeping up and staying productive will also require staying competitive in the global marketplace. No wonder the war for AI talent rages on in Europe.

“Riding the investment wave, a crop of foreign AI firms – including Canada’s Cohere and U.S.-based Anthropic and OpenAI – opened offices in Europe last year, adding to pressure on tech companies already trying to attract and retain talent in the region,” Reuters reported.

AI is also creating new job opportunities. Adweek says that marketing roles involving AI are exploding, from the C-suite on down. Among other new uses:

Gen AI entails a new layer of complexity for brands, prompting people within both brands and agencies to grasp the benefits of technology, such as Sora, while assessing its risks and ethical implications.

Navigating this balance could give rise to various new roles within the next year, including ethicists, conversational marketing specialists with expertise in sophisticated chatbots, and data-informed strategists on the brand side, according to Jason Snyder, CTO of IPG agency Momentum Worldwide.

Additionally, Snyder anticipates the emergence of an agency integration specialist role within brands at the corporate level.

“If you’re running a big brand marketing program, you need someone who’s responsible for integrating AI into all aspects of the marketing program,” said Snyder. “[Now] I see this role in bits and pieces all over the place. [Eventually], whoever owns the budget for the work that’s being done will be closely aligned with that agency integration specialist.”

As companies like DeepMind offer incentives such as restricted stock, domestic startups will continue to struggle with hiring top talent if their AI tech stack isn’t up to the standard of big players like NVIDIA.

“People don’t want to leave because when you don’t have anything when they have peers to work with, and when they already have a great experimentation stack and existing models to bootstrap from, for somebody to leave, it’s a lot of work,” Aravind Srinivas, the founder and CEO of Perplexity, told Business Insider.

“You have to offer such amazing incentives and immediate availability of compute. And we’re not talking of small compute clusters here.”

Another reminder that building a competitive, attractive employer brand around your organization’s AI integrations should be on every communicator’s mind. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-7/feed/ 0
AI for communicators: What’s new and what’s next https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-5/ https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-5/#respond Thu, 15 Feb 2024 10:00:39 +0000 https://www.prdaily.com/?p=341954 Deepfakes resurrect dead political leaders and how AI impacts layoffs.  Ai continues hurtling forward, bringing with it new promise and new peril. From threats to the world’s elections to hope for new kinds of jobs, let’s see how this technology is impacting the role of communicators this week. Risks 2024 is likely the biggest election […]

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
Deepfakes resurrect dead political leaders, plus how AI impacts layoffs.

AI continues hurtling forward, bringing with it new promise and new peril. From threats to the world’s elections to hope for new kinds of jobs, let’s see how this technology is impacting the role of communicators this week.

Risks

2024 is likely the biggest election year in the history of the world. Nearly half the planet’s inhabitants will head to the polls this year, a major milestone. But that massive wave of humanity casting ballots comes at the precise moment that AI deepfakes are altering the information landscape, likely forever.

In both India and Indonesia, AI is digitally resurrecting long-dead politicians to weigh in on current elections. A likeness of M Karunanidhi (date of death: 2018), former leader of India’s Dravida Munnetra Kazhagam (DMK) party, delivered an 8-minute speech endorsing current party leaders. Indonesian general, president and strongman Suharto (date of death: 2008) appeared in a social media video touting the benefits of the Golkar party.

Neither video is intended to fool anyone into thinking these men are still alive. Rather, they’re using the cachet and popularity of these deceased leaders to drum up votes in the elections of today. While these deepfakes may not be overtly deceptive, they’re still putting words these men never spoke into their virtual mouths. It’s an unsettling prospect and one that could pay big dividends in elections. There’s no data to know how successful the strategy might be – but we’ll have it soon, for better or worse.


Major tech companies, including Google, Microsoft, Meta, OpenAI, Adobe and TikTok, all intend to sign an “accord” that would hopefully help identify and label AI deepfakes amid these vital elections, the Washington Post reported. It stops short of banning such content, however, merely committing to more transparency around what’s real and what’s AI.

“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the accord says.

But while the intentions may be good, the technology isn’t there yet. Meta has committed to labeling AI imagery created with any generative tool, not just its own, but they’re still developing the tools. Will transparency catch up in time to act as a safeguard to this year’s many elections? 

Indeed, OpenAI CEO Sam Altman admits that it’s not the threat of artificial intelligence spawning killer robots that keeps him up at night – it’s how everyday people might use these tools.

“I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong,” Altman said during a video call at the World Governments Summit.

One example could be technology for tracking employees’ Slack messages. More than 3 million employees at some of the world’s biggest companies are already being observed by Aware AI software, designed to track internal sentiment and preserve chats for legal reasons, Business Insider reported. It can also track other problematic behaviors, such as bullying or sexual harassment.

The CEO of Aware says its tools aren’t intended to be used for decision-making or disciplinary purposes. Unsurprisingly, this promise is being met with skepticism by privacy experts.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” said Amba Kak, executive director of the AI Now Institute at New York University.

That’s where we are right now: a state of good intentions for using technology that is powerful enough to be dangerous, but not powerful enough to be fully trusted.

Regulation, ethics and government oversight

The push for global AI regulation shows no signs of slowing, with notable developments including a Vatican friar leading an AI commission alongside Bill Gates and Italian Prime Minister Giorgia Meloni to curb the influence of ChatGPT in Italian media, and NVIDIA CEO Jensen Huang calling for each country to cultivate its own sovereign AI strategy and own the data it produces.

“It codifies your culture, your society’s intelligence, your common sense, your history – you own your own data,” Huang told UAE’s Minister of AI Omar Al Olama earlier this week at the World Governments Summit in Dubai.

In the U.S., federal AI regulation took several steps forward last month when the White House followed up on its executive order announced last November with an update on key, coordinated actions being taken at the federal level. Since then, other federal agencies have followed suit, issuing new rules and precedents that promise to directly impact the communications field.

Last week, the Federal Communications Commission (FCC) officially banned AI-generated robocalls to curb concerns about election disinformation and voter fraud. 

According to the New York Times:

“It seems like something from the far-off future, but it is already here,” the F.C.C. chairwoman, Jessica Rosenworcel, said in a statement. “Bad actors are using A.I.-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters.”

Those concerns came to a head late last month, when thousands of voters received an unsolicited robocall from a faked voice of President Biden, instructing voters to abstain from voting in the first primary of the election season. The New Hampshire attorney general’s office announced this week that it had opened a criminal investigation into a Texas-based company it believes is behind the robocall. The caller ID was falsified to make it seem as if the calls were coming from the former New Hampshire chairwoman of the Democratic Party.

This is a vital area for communicators to monitor – and to clearly and proactively message how to spot scams and distinguish your organization’s real calls and emails from fakes. Don’t wait until you’re being spoofed – communicate now.

Closer to the communicator’s purview is another precedent expressed in recently published guidelines by the U.S. Patent and Trademark Office that states it will only grant its official legal protections to humans, citing Biden’s aforementioned Executive Order in claiming that “patents function to incentivize and reward human ingenuity.”

The guidance clarifies that, though inventions made using AI are not “categorically unpatentable,” the AI used to make them cannot be classified as the inventor from a legal standpoint. This requires at least one human to be named as the inventor for any given claim – opening their claim to ownership up for potential review if they have not created a significant portion of the work.

Organizations that want to copyright or patent work using GenAI would do well to codify their standards and documentation for explaining exactly how much of the work was created by humans. 

That may be why the PR Council recently updated its AI guidelines “to include an overview of the current state of AI, common use cases across agencies and guidance on disclosure to clients, employee training and more.”

The Council added that it created a cross-disciplinary team of experts in ethics, corporate reputation, digital, and DE&I to update the guidelines.

The updates state:

  • A continuum has emerged that delineates phases in AI’s evolution within firms and highlights its implications for serving clients, supporting teams and advancing the public interest. 
  • While AI use cases, especially among creative teams, have expanded greatly, the outputs are not final, client-ready work due to copyright and trademark issues and the acknowledgment that human creativity is essential for producing unique, on-strategy outputs. 
  • With AI being integrated into many existing tools and platforms, agency professionals should stay informed about new capabilities, challenges and biases. 
  • Establishing clear policies regarding the use of generative AI, including transparency requirements, is an increasing need for agencies and clients. This applies to all vendors, including influencer or creator relationships. 
  • Despite predictions that large language models will eliminate hallucinations within 18 months, proper sourcing and fact-checking remain crucial skills. 
  • Experts continue to advise caution when inputting confidential client information, due to mistrust of promised security and confidentiality measures.  
  • Given the persistent risk of bias, adhering to a checklist to identify and mitigate bias is critical. 

These recommendations function as a hyperlocal safeguard for risk and reputation that communicators can own and operationalize throughout the organization. 

Tools and innovations

AI’s evolution continues to hurtle ahead at lightning speed. We’re even getting rebrands and name changes, as Google’s old-fashioned-sounding Bard becomes the more sci-fi Gemini. The new name comes with a new mobile app to enable AI on the go, along with Gemini Advanced, a $19.99/month service that uses Google’s “Ultra 1.0 model,” which the company says is more adept at complex, creative and collaborative tasks.

MIT researchers are also making progress on an odd issue with chatbots: their tendency to crash if you talk to them for too long. You can read the MIT article for the technical details, but here’s the bottom line for end users: “This could allow a chatbot to conduct long conversations throughout the workday without needing to be continually rebooted, enabling efficient AI assistants for tasks like copywriting, editing, or generating code.”

Microsoft, one of the leading companies in the AI arms race, has released three major trends it foresees for the year ahead. This likely adheres to its own release plans, but nonetheless, keep an eye on these developments over the next year:

  • Small language models: The name is a bit misleading – these are still huge models with billions of data points. But they’re more compact than the more famous large language models, often able to be stored on a mobile phone, and feature a curated data set for specific tasks. 
  • Multimodal AI: These models can understand inputs via text, video, images and audio, offering more options for the humans seeking help.
  • AI in science: While many of us in comms use AI to generate text, conduct research or create images, scientists are using it to improve agriculture, fight cancer and save the environment. Microsoft predicts big improvements in this area moving forward. 

AI had a presence at this year’s Super Bowl, though not as pronounced as, say, crypto was in 2022. Still, Microsoft’s Copilot product got an ad, as did some of Google’s AI features, Adweek reported. AI also featured in non-tech brands like Avocados from Mexico (GuacAImole will help create guac recipes) and as a way to help Etsy shoppers find gifts.

But AI isn’t just being used as a marketing tool, it’s also being used to deliver ads to viewers. “Disney’s Magic Words” is a new spin on metadata. Advertisers on Disney+ or Hulu can tie their advertising not just to specific programs, but to specific scenes, Reuters reported. This will allow brands to tailor their ads to fit the mood or vibe of a precise moment. No more cutting away from an intense, dramatic scene to a silly, high-energy ad. This could help increase positive brand sentiment by more seamlessly integrating emotion into programmatic ad choices.

AI at work 

The question of whether or not AI will take away jobs has loomed large since ChatGPT came on the scene in late 2022. While there’s no shortage of studies, facts and figures analyzing this trend, recent reports suggest that the answer depends on where you sit in an organization.

A recent report in the Wall Street Journal points to recent layoffs at companies like Google, Duolingo and UPS as examples where roles were eliminated in favor of productivity automation strategies, and suggests that managers may find themselves particularly vulnerable.

The report reads:

“This wave [of technology] is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” said Andy Challenger, senior vice president of outplacement firm Challenger, Gray & Christmas.

Since last May, companies have attributed more than 4,600 job cuts to AI, particularly in media and tech, according to Challenger’s count. The firm estimates the full tally of AI-related job cuts is likely higher, since many companies haven’t explicitly linked cuts to AI adoption in layoff announcements.

Meanwhile, the number of professionals who now use generative AI in their daily work lives has surged. A majority of more than 15,000 workers in fields ranging from financial services to marketing analytics and professional services said they were using the technology at least once a week in late 2023, a sharp jump from May, according to Oliver Wyman Forum, the research arm of management-consulting group Oliver Wyman, which conducted the survey.

It’s not all doom and gloom, however. “Job postings on LinkedIn that mention either AI or generative AI more than doubled worldwide between July 2021 and July 2023 — and on Upwork, AI job posts increased more than 1,000% in the second quarter of 2023, compared to the same period last year,” reports CNBC. 

Of course, as companies are still in an early and experimental phase with integrating AI into workflows, the jobs centered around them carry a high level of risk and uncertainty. 

That may be why efforts are afoot to educate those who want to work in this emerging field.

Earlier this week, Reuters reported that Google pledged €25 million to help Europeans learn how to work with AI. Google accompanied the announcement by opening applications for social organizations and nonprofits to help reach those who would benefit most from the training. The company also expanded its online AI training courses to include 18 languages and announced “growth academies” that it claims will help companies using AI scale their business.

“Research shows that the benefits of AI could exacerbate existing inequalities — especially in terms of economic security and employment,” Adrian Brown, executive director of the Centre for Public Impact nonprofit collaborating with Google on the initiative, told Reuters. 

“This new program will help people across Europe develop their knowledge, skills and confidence around AI, ensuring that no one is left behind.”

While it’s unclear what industries or age demographics this initiative will target, one thing’s certain: the next generation workforce is eager to embrace AI.

A 2024 trends report from Handshake, a career website for college students, found that 64% of tech majors and 45% of non-tech majors graduating in 2024 plan to develop new skills that will allow them to use gen AI in their careers.

“Notably, students who are worried about the impact of generative AI on their careers are even more likely to plan on upskilling to adapt,” the report found.

These numbers suggest there’s no time to waste in folding AI education into your organization’s learning and development offerings. The best way to ease obsolescence concerns among your workforce is to integrate training into career goals and development plans, standardize that training across all relevant functions and skill sets, then make it a core part of your employer brand.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

The post AI for communicators: What’s new and what’s next appeared first on PR Daily.

]]>
https://www.prdaily.com/ai-for-communicators-whats-new-and-whats-next-5/feed/ 0