AI and Automation Archives - PR Daily
https://www.prdaily.com/category/ai-and-automation/
PR Daily - News for PR professionals

One Big AI Idea: Make AI your style editor
https://www.prdaily.com/one-big-ai-idea-make-ai-your-style-editor/
Wed, 27 Nov 2024 11:00:14 +0000

Getting an entire department – or company! – to use a particular style consistently can be like herding cats.

Someone insists on using the Oxford comma no matter how many times you tell them not to. Brad overuses hyphens like they’re going out of style. And Janet, for some reason, insists on using British spellings.

AI can help.

If you’re using a set style, such as AP or MLA, you can simply ask your AI of choice to edit according to that style guide. Or, if you have an in-house document that guides your style use, you have two options.

For a quick-and-dirty method, upload or paste in the style document, then upload the document you want to edit and ask the AI to check it against that style. You repeat this process every time you edit.

Or you can create a custom GPT that only requires you to upload the style document(s) once. You can then share it throughout the organization. This requires marginally more setup on the front end but can pay off in time savings down the road. You will need a paid ChatGPT account to create a custom GPT.
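Either route comes down to the same pattern: give the model your style rules plus the draft and ask for an edited version. Here is a minimal sketch of the quick-and-dirty approach, assuming the OpenAI Python SDK, a gpt-4o-class model and local files named style_guide.md and draft.md; the file names, prompt wording and model choice are illustrative, not a PR Daily recommendation.

```python
# Minimal sketch: ask a chat model to edit a draft against an in-house style guide.
# Assumes the OpenAI Python SDK ("pip install openai") and OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

style_guide = Path("style_guide.md").read_text()  # your in-house style rules (assumed file name)
draft = Path("draft.md").read_text()              # the document you want edited (assumed file name)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor. Edit the user's draft so it follows the "
                "style guide below, and list every change you make so a human "
                "can verify it.\n\n" + style_guide
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

Asking for the list of changes up front makes the verification step described next much easier.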

When editing, ask the AI to highlight the changes it’s making so you can double-check its work. Never trust AI without verifying – in some instances, it may swear up and down that it’s removed all the Oxford commas when it hasn’t removed a single one.

Trust but verify, always.

Allison Carter is editor-in-chief of PR Daily. Follow her on Bluesky or LinkedIn.

AI may require PR agencies to reevaluate billing models
https://www.prdaily.com/ai-may-require-pr-agencies-to-reevaluate-billing-models/
Thu, 21 Nov 2024 12:00:03 +0000

Tools like ChatGPT are completing in minutes tasks that used to take hours, so firms may want to consider how that will affect hourly billing.

Michelle Olson remembers the early days of her PR career, when researching a complex crisis communications plan would take hours. It was worth it to the client, she said, which is why they were OK with having it all billed back to them.

Today, however, thanks to AI, that initial research can happen in just a few seconds. While Olson or her teammates at Lambert by LLYC still need to fact-check for accuracy, they can complete many tasks faster than they did even two years ago, thanks to ChatGPT, she said.

 

 

None of her clients are asking for adjustments to their rates right now, Olson said. But given the rapid advancements in these technologies, she feels now is the time for firms to start thinking about not only how they’re billing clients but also how they’re proving their value to them.

“It’s really about viewing this opportunity to highlight that we’re much more than just ‘doers’ of tasks,” said Olson, Lambert’s chief client officer and a fellow at PRSA.

A discussion 20 years in the making

Olson’s firm still bills “pure,” or the actual time it takes to complete a work item. “If we have a retainer-type arrangement with our clients, we still (build) an hourly rate into that retainer.”

But she and PRophet founder and CEO Aaron Kwittken both said there have been conversations for more than 20 years about finding models to replace billable hours as the dominant method.

“I think the reason for the use of billable hours is that we’ve either been scared of or can’t define what success looks like,” said Kwittken, a “recovered agency guy” who pivoted to comms tech in 2022.

Olson noted that a much-discussed concept has been a value billing model, based on the impact of the work rather than the time it takes to complete it.

For instance, a quality pitch to the Wall Street Journal may only take a five-minute phone call – roughly $8 of time at a $100-per-hour rate – but the value of that placement could be “priceless” to a brand, Olson said.

It’s common for agencies to build in a certain number of hours per month, but Olson noted that this approach has flaws. “Our services aren’t utilized that way,” she explained.

“We’re crisis communicators. We’re issue managers. There’s something that happens every month that we don’t count on, that a communicator needs to help with,” she continued. “The hours are going to ebb and flow.”

How to readjust retainers

On the client side, teams want as much value as possible, Kwittken said. As such, it’s about rethinking the billing process to highlight the work that’s taking place beyond press releases and website copy.

“(Clients) want to fix their costs and don’t want them to creep because they have a budget,” he added. “They want to pay for performance, not just activity reports. They want to know what we did to help them achieve their goals, like sales or shareholder valuation.”

The emergence of this tech may give PR agencies a chance to “productize,” not commoditize, what they do and assign specific costs or values to each task, service or deliverable against objective success goals, Kwittken said. He gave the example of tying PR’s impact on sales, employee morale, shareholder value, etc. directly into their client’s CRM.

To that end, Olson sees the potential for PR agencies to go back to the negotiating table and really drive home the value of what they bring.

Olson’s hope is that while they may bill fewer hours for a particular project, AI is creating more time “to be in our clients’ heads about what they worry about every day.” That means there’s more time to do the analysis of social media audiences or strategize about campaigns.

“With those extra two, three hours we can figure out how to make a bigger impact for the client, so that the client benefits from us,” she said. “Maybe that’s another brainstorming session about an issue that they hadn’t told us about yet, because we’re not scoped for that.”

As part of the process, Olson suggested asking clients things such as what’s keeping them up at night and how they can help.

“We have a chance to become bigger strategic partners as an agency,” she said.

Finding the right solution for your firm

Olson noted that there’s no true challenger to the billable hour system. In fact, she’s known of only three agencies that have gone to the value billing model.

Two of them don’t even exist anymore.

That doesn’t mean value billing or another system won’t work, she said. It also doesn’t mean teams should avoid AI for the sake of taking longer to complete a job.

In fact, it’s just the opposite, Olson said. She believes the new data and insights that AI can provide will improve strategy and help measure performance.

Firms need to evaluate their business operations and find areas where they can improve their high-level offerings. Doing so, Olson believes, will lead teams to hire more strategists, writers and data analysts.

“This is our moment to take the lead,” Olson said.

Casey Weldon is a reporter for PR Daily. Follow him on LinkedIn.

 

 

How AI helped Syneos Health’s Matthew Snodgrass improve client first drafts
https://www.prdaily.com/how-ai-helped-syneos-healths-matthew-snodgrass-improve-client-first-drafts/
Wed, 20 Nov 2024 11:00:58 +0000

Reducing time spent parsing regulatory rules could be a game changer.

Working through the maze of FDA, FTC and other regulations that govern communications around pharmaceuticals and other healthcare items can be challenging for even the most experienced human to handle.

But an AI will never get tired, rarely get confused and can be updated with just a few clicks of a mouse.

Matthew Snodgrass, AI innovation lead at Syneos Health Communications, is currently testing a custom GPT that will help create cleaner drafts of regulatory-compliant content – but that can never fully replace the discernment and judgment of a person.

Here’s how AI helped him.

Responses have been edited for style and brevity.

 

One of the thorniest problems for communicators in regulated industries is figuring out what the heck you can and can’t say legally. Tell me how this idea came about and how you’ve been working on this GPT.

At Syneos, the other, larger half of our family is in clinical trials. So dealing with patient information comes with very strict internal rules and regulations – very strict privacy, data retention and collection policies. On the communications side, which is typically a little more free to experiment and communicate, we’re still beholden to those strict rules, which, in a way, is very good, because it puts us in the mindset that we have to be very responsible, both from a data privacy and an ethics standpoint, in how this is used.

I’ve been working with my colleagues to find out what problems they have and what issues could be solved. Were there things that could be sped up or done better, faster? I decided to turn inward, because one of the other hats I wear is counsel on rules and regulations as they relate to pharma marketing – rules and regs from the FDA, the FTC and the U.S. Code of Federal Regulations. I thought, if I can combine all of the actual regulations and rules from federal entities with my expertise, knowledge and interpretation of those rules, could we create a GPT that kind of mimics that interpretation, so that we could use it to look at and analyze proposed content before it gets to the client?

What happens a lot of times is the MLR — medical, legal, regulatory — teams at pharma clients will look at a piece of content and send it back and say, ‘you can’t say this, and if you say this, then you have to say that, and you can’t use this picture with this,’ and so on and so forth. So we wanted to create a tool that helps get ahead of that, produces a better product, speeds up the process and lets us scale beyond having content flow through just one person or a couple of people.

So this is not replacing human oversight. This is helping get a cleaner draft to the client, essentially?

Exactly. You effectively hit the nail on the head of how I recommend using AI: use it as a draft only, and trust but verify. It’s always going to need the human element to verify.

What I’m hearing is that (people) fear that AI takes over everything. And that’s not going to be the case. What I hear some clients may want is that humans are involved a little bit, but AI speeds up everything else and everything’s cheaper and quicker. And that’s not necessarily the case either. It’s going to be a mixture where we will work together with an AI on things like research, drafting things that, together with the context of a person and the speed and volume of information with an AI, you can produce a better output. We’ll hand off to AI those elements that they can just simply do better, like analysis, summaries, looking at large volumes of information and distilling it down. But we’ll keep the elements that currently only humans do well, which is strategy, creativity, content development, the truly, very human-centric elements.

Have you gotten to the point where you’re talking with clients about this GPT, and if so, what’s the reaction?

The conversations that we’re having with clients are very similar to the ones we had 15 years ago with social media. Some of them are really pushing because of internal champions to be at the forefront of experimentation and trying it out. Some are behind because they may be a small biotech that’s really focused on their research and development and just don’t have the resources to push the AI envelope yet. It’s very similar.

Have you had anyone at the other end saying, I don’t want AI on any of the materials you’re working on for us? Have you gotten that reaction?

Yes, and it’s been for different reasons. One, they’re not so sure about it. Or, what I see often is they hop into Copilot, ask a very simple prompt that may not be comprehensive, and they get a non-comprehensive answer. They go, ‘oh, that’s not good. I don’t want anybody using it.’ Or it’s the comms team that really wants to push the envelope, but it might be their legal team that is not ready to let them get to that point, because they don’t have their ducks in a row yet.

Tell me a little bit more about how you’re going about building your regulatory GPT. What phase are you in with that process?

I would say we’re in the alpha phase right now: we have a proof of concept built, and I’m continuing to train it. I created a 16-page missive on how I interpret FDA, FTC and U.S. Code of Federal Regulations rules. I keep testing it with queries, and it may come back with something that’s not quite right. So then I go back to the document, update it and re-upload it. I feel like I’m opening its brain, tinkering with it, closing it again, and then going back. Once I feel it’s confident enough that it can help our colleagues and help clients, we would unveil it as an internal usage tool. It’s getting there.

For more on the fast-changing world of AI, join us at Ragan’s AI Horizons Conference in February.

Mastering AI: How to craft persuasive and productive prompts
https://www.prdaily.com/mastering-ai-how-to-craft-persuasive-and-productive-prompts/
Thu, 14 Nov 2024 11:30:46 +0000

Levar Cooper from Lake County Government in Florida kicked off Ragan’s Future of Communications Conference with gen AI prompts you can use today.

Tools are only as helpful as how you use them, and generative AI tools are no different — the outputs of tools like ChatGPT are only as useful as the prompts you feed them.

Levar Cooper, communications director at Lake County Government in Florida, is optimistic about the future of communications and how automation will inform it.

“I’m on a mission to help as many people benefit from the power of AI as possible,” he told attendees Wednesday during his opening workshop at Ragan’s Future of Communications Conference.

After Cooper acknowledged the current limitations of AI, including cognitive biases, adoption barriers, and policy and regulation proposals that keep people from diving in, he shared several AI prompting tips to open Ragan’s flagship CommsWeek event.

Here’s what stuck out.

Selecting the right tools

Cooper recommends communicators resist the shiny allure of technology itself to consider how these tools actually meet their needs.

“It’s not enough just to use AI — you’ve got to have a strategy behind it,” Cooper said.

Considerations should include:

  • Business alignment. This means ensuring that the tool aligns with and supports your organization’s strategic goals.​
  • Data privacy and compliance. You should always confirm the tool meets data privacy and security standards to protect sensitive information from the outset.​
  • User experience and integration. Assessing each tool’s ability to integrate smoothly with current workflows and its ease of use will encourage buy-in across functions and move you along the adoption curve. “We often think of user experience as customer experience, but it’s really everyone at your organization who has to use it,” said Cooper.
  • Scalability and flexibility. Make sure to choose a tool that can scale with your organization and adapt to future needs. This may mean that it includes some features and functions you aren’t ready for yet, but can work toward implementing down the line.

Prompts to scale use and meet content needs

Cooper explained what you need to give AI to be successful. “When talking about the prompting identity, I give it an assignment and then give it context,” he said.

These are the prompts he’s applied successfully for each use case (a code sketch of the same assignment-plus-context pattern follows the list):

  1. Content planning. “Please act as my content coordinator and create a December social media calendar for Lake County Fire Rescue’s Facebook page that leverages data-supported best practices. Incorporate national holidays and area events where practical.”
  2. Content drafting. “Please act as my political consultant and draft a speech for the groundbreaking of a new Leslie B. Knope community center in Pawnee, Indiana, in the voice of Mayor Gergich.” This is an example of how AI can reference broader events and culture – in this instance, the popular show “Parks & Recreation.”
  3. Event planning. “Please act as my event coordinator and create an event plan using the framework of the attached document for the grand opening of the new Braised Bison Bistro location in Denver, Colorado.” Cooper said that “uploading that framework allows AI to adapt to my framework, and not the other way around.”
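Here is that sketch: a minimal example of the identity-plus-assignment-plus-context pattern, assuming the OpenAI Python SDK and a gpt-4o-class model. The helper name and the way the example mirrors the content-planning prompt are illustrative and do not come from Cooper's workshop.

```python
# Minimal sketch of the "give it an identity, an assignment, then context" pattern.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def prompt_with_identity(role: str, assignment: str, context: str) -> str:
    """Send an identity ("please act as my ..."), an assignment and supporting context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[
            {"role": "system", "content": f"Please act as my {role}."},
            {"role": "user", "content": f"{assignment}\n\nContext:\n{context}"},
        ],
    )
    return response.choices[0].message.content


# Example mirroring the content-planning prompt above.
calendar = prompt_with_identity(
    role="content coordinator",
    assignment=(
        "Create a December social media calendar for Lake County Fire Rescue's "
        "Facebook page that leverages data-supported best practices."
    ),
    context="Incorporate national holidays and area events where practical.",
)
print(calendar)
```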

Working with custom prompts

Custom prompts allow you to harness the output of these tools for more strategic purposes.

“Many of these platforms allow for custom prompts, which really helps supercharge what you’re doing in a repeatable context,” Cooper said, but urged communicators to embrace the DRY mantra — that’s “don’t repeat yourself”— as a reminder to ensure your workflow is dynamic and iterative.

His tips for custom prompts include:

  • Define objectives and context. Cooper recommends clarifying the purpose of the prompt and providing relevant context such as the target audience, tone and format.
  • Be specific and test iteratively. Give your tool precise instructions and refine the prompt based on trial and error to improve results over time. The more you spell these details out, the better your tool learns them.
  • Use examples and boundaries. Including examples and specifying output constraints (those can also be tone, style or format) will help you guide the AI response toward more effective outputs.
  • Break down complex tasks. For multi-phase projects, you can chain prompts in stages to build structured, aligned outputs for each part of the task (see the sketch after this list). This will minimize the likelihood of your tool getting confused and allow you to train it at multiple points in the project.
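Chaining prompts in stages can be as simple as feeding each stage's output into the next request. A minimal sketch under the same assumptions as above (OpenAI Python SDK, gpt-4o-class model); the stages and their wording are illustrative, not from Cooper's session.

```python
# Minimal sketch of chaining prompts so each phase of a project builds on the last.
# Assumes the OpenAI Python SDK; the stages and wording are illustrative.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Stage 1: plan.
outline = ask(
    "Please act as my communications strategist and outline a three-month "
    "campaign plan for a county recycling program."
)

# Stage 2: draft, grounded in stage 1's output.
drafts = ask("Using this campaign outline, draft three sample social posts:\n\n" + outline)

# Stage 3: review, grounded in stage 2's output.
print(ask("Act as an editor and flag anything unclear or off-tone in these drafts:\n\n" + drafts))
```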

Prompts to optimize engagement

Cooper also shared ways to get Claude to analyze data and provide insights, including:

  • Audience insights. “Please act as my strategic communications consultant and provide a sentiment analysis in the form of a report on posts related to debris collection following Hurricane Milton and include trend insights beginning on Oct. 10.​”
  • Platform insights. “Please act as a business analyst and make recommendations on the optimal times for posting content based on the provided data.​” Cooper said this business inquiry is especially powerful because it’s giving you insights that demystify algorithms and tell you why things aren’t working as well.

Cooper went deeper into using AI to craft compelling visuals, train systems on executive voice, engage internal stakeholders to move them along the adoption curve and more during his full workshop, which will be available in the coming weeks to Ragan Training members. Subscribe today!

Keep your eyes peeled for more coverage from #CommsWeek2024.

Essential questions for choosing the right AI solutions in communications
https://www.prdaily.com/essential-questions-for-choosing-the-right-ai-solutions-in-communications/
Thu, 14 Nov 2024 10:00:45 +0000

How to navigate the challenges of AI adoption in comms teams and make informed decisions on the right AI tools for your organization’s future success.

As AI continues to shape our future, organizations are continually exploring how best to integrate AI tools into a marketing and communications team’s day-to-day in a way that invites new ways of working without causing crippling disruption.

Though adoption is improving, many teams still report barriers when taking up these tools. According to a recent survey, 67% of communication professionals cite the team’s ability to use AI technology effectively as a significant barrier, while 77% struggle with the complexity of integrating new systems with their existing environments.

Key challenges to AI adoption in communication teams

One significant challenge is that PR and communication teams often lack a reliable, objective source for evaluating AI products tailored specifically to their needs, forcing them to seek costly outside help. Additionally, many team structures aren’t designed to quickly assess and act on AI adoption decisions, causing delays that may hinder their ability to stay current in the rapidly evolving landscape.

Five key areas for evaluating commstech AI providers

  1. Data Security and Quality
  • How secure is my data? Ask about encryption protocols, data handling practices and compliance with industry data security standards.
  • Which generative AI providers are used? Knowing which platforms are integrated helps assess capabilities and potential risks.
  • How frequently do you run smoke tests? Regular smoke and penetration (pen) testing is essential for data security.
  • How do you manage user permissions? Ensure user roles and permissions are manageable to prevent unauthorized access, especially if SOC compliance is needed.
  • How often is your data refreshed? Confirm how often data is refreshed and cleaned. Daily? Weekly? Hourly?

Summary: Choose a provider that ensures strong security protocols, reliable data sources and adherence to data governance standards to protect sensitive information and meet regulatory requirements.

  2. Adoption: Integration and support
  • How can the solution integrate with our existing tech stack? Seamless integration minimizes disruption and allows the product to enhance current workflows.
  • What is your onboarding and training process? Effective onboarding and training are essential for user adoption and smooth implementation.
  • What is your post-purchase support? Look for dedicated support, such as account managers or customer success teams, for ongoing assistance.
  • What are successful training practices for my staff? Insights into other customers’ training approaches can prepare your team for a successful rollout.

Summary: Providers that offer comprehensive onboarding, ongoing support and open communication will help your team overcome adoption challenges.

  3. Technology’s unique benefits
  • Who do you compete with? Understanding competitors and differentiators helps reveal the product’s unique strengths and any potential trade-offs.
  • What are your product limitations? Awareness of common challenges can help set realistic expectations.
  • What are examples of success? Request specific examples to see how the tool benefits similar businesses.
  • Can you share usage patterns? Insights into high- and low-usage patterns can guide effective adoption strategies.

Summary: Providers that are transparent about their product’s strengths and limitations, and can demonstrate successful use cases, are better positioned to meet your needs.

  4. Transparency and terms
  • Do you share product roadmaps? A product roadmap shows the provider’s commitment and transparency.
  • Do you provide clear pricing and trial options? Ensure clarity around costs and whether fees adjust based on usage. Full-access trial periods are common, so ask about them.
  • Do you offer indemnification? Check if the provider offers indemnity for using generative AI technology to protect against unforeseen risks.

Summary: Providers who prioritize transparent, fair terms are well-suited for long-term partnerships.

  5. Questions to ask your own team

To ensure effective adoption, discuss internally:

  • What are our goals for using this AI tool?
  • Who will oversee the tool and manage provider communications?
  • Are IT and legal aligned with this adoption?
  • What metrics will measure the tool’s value?
  • How will we communicate our experiences internally?

Summary: No one knows your needs better than you do. Taking time to clarify these questions will help you find a suitable provider and the best plan.

 

By asking the right questions, you protect your organization, your team and your clients — so don’t hesitate to inquire deeply. If you have further questions or need additional guidance, PRophet, launched in 2021, is here to help you prepare for the future of AI in communications. Through tackling difficult challenges, we have developed a suite of safe, high-performance, and predictive AI products. Using this expertise, we’re happy to offer guidance to help inform your technology choices for 2025 and beyond.

Please reach out to us for more insights at sales@PRprophet.ai and join us Friday, November 15 at Ragan’s Future of Communications Conference to learn more during our session on “Unlocking the Future of PR: Selecting the Right AI Tools for 2025.”

 

One Big AI Tip: Try ‘post brainstorming’
https://www.prdaily.com/one-big-ai-tip-try-post-brainstorming/
Wed, 13 Nov 2024 11:00:41 +0000

Come up with some ideas yourself — then let AI fill in the gaps.

Brainstorming is one of the most popular uses of AI for communicators. And generative AI can certainly be helpful in getting the white off the page and stretching your thinking in new directions. But doing the first brainstorm yourself and then asking AI to step in can also be helpful.

Here’s how this “post brainstorming” works.

  1. Spend 10 minutes coming up with ideas for what you’re working on. These could be angles for a pitch, survey questions or new campaign concepts.
  2. Prompt your AI tool: “I’ve come up with some ideas for (whatever you’re working on). Using these as a guide, what would you add? Please give me 10 more ideas.”
  3. See what you get. Some of the ideas might be duplicates, some might not be what you’re looking for, but a few might just help you see past your own blind spots.

Give it a try next time you need to brainstorm: you might find that giving the AI tool ideas to start with yields higher-quality results.
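If you want to script that hand-off, the same prompt can be sent through an API. A minimal sketch, assuming the OpenAI Python SDK and a gpt-4o-class model; the seed ideas and wording are placeholders, not from the article.

```python
# Minimal sketch of "post brainstorming": start with your own ideas,
# then ask the model to extend the list. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

# Step 1: your own 10-minute brainstorm (placeholder ideas).
my_ideas = [
    "Pitch a local-angle story tied to the product launch",
    "Survey customers about how they use the product",
    "Partner with a community nonprofit for the announcement",
]

prompt = (
    "I've come up with some ideas for our launch campaign. "
    "Using these as a guide, what would you add? Please give me 10 more ideas.\n\n"
    + "\n".join(f"- {idea}" for idea in my_ideas)
)

# Step 2: let the AI fill in the gaps.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```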

AI Helped Me: Creating a tool for marketing briefs in minutes in a regulated industry
https://www.prdaily.com/ai-helped-me-creating-a-tool-for-marketing-briefs-in-minutes-in-a-regulated-industry/
Wed, 06 Nov 2024 11:00:07 +0000

The tool can automate repetitive tasks so humans can do the fun parts.

In marketing, so many activities are tedious. Whether that’s updating dozens of banner ads to reflect new legal disclosures or writing a marketing brief, AI is increasingly a way to reduce the tedium and increase the creativity in any marketer’s day.

Chris Cullmann, chief innovation officer at RevHealth, and Douglas Barr, AI lead and founder at PixieDust Labs, worked together to create a tool that would cut down on some of these headaches while also understanding the unique regulatory challenges in healthcare. So AgencyOS was born.

Here’s how the idea came together – and here’s what’s next.

Answers have been edited for brevity and clarity.

How did you come up with the idea for an AI that could give a marketing brief in seven minutes?

Cullmann: In conversations with Doug, we were forecasting the role artificial intelligence is going to play in not just the search market, but the experiential market for any industry where access to accurate information is important. As we explored ways to build structures around accuracy – removing hallucinations and limiting the data sets so a manufacturer gets access to the most relevant information about their therapies in the context of treating patients – we started talking, very casually, about what we could do with this: What are the challenges inside an agency that we could leverage these same technologies for? And from that, Doug built a prototype that was fascinatingly close to the iteration that launched us into the solution you’re talking about.

Barr: When we started thinking about our understanding of healthcare, healthcare data and content generation, it was a perfect lead-in to generating quality content and solving some of the more complex problems within healthcare.

 

 

Obviously regulatory needs were top of mind. Data privacy, I’m sure, was a huge issue. Walk me through how you addressed some of those issues.

Barr: We put safeguards and privacy protections into place, more on the code side of things, that allowed us to maintain privacy and make sure that we were compliant with all the privacy acts and HIPAA. That was actually a very complex problem to solve … We had to develop a couple of intellectual property models to help prevent hallucinations from occurring and make sure it didn’t divulge any private data or anything that wasn’t yet in market. We began by using in-market, publicly available content, and Chris and the partners there allowed us to consume that type of knowledge, which lessened the restrictions on what we could get out of it and made sure, again, that it wasn’t out of compliance. That’s where we began.

Cullmann: I also think the partnership between RevHealth and Pixie Dust allowed us to be able to explore models that worked with incremental steps to a potentially automated future, but more importantly, one where we’re allowing our team members to completely interact and inform that model.

This allows our team members, when they’re working with the platform, to review and verify the information that’s coming out of the artificial intelligence, and one that allows us to be able to have our teams augmenting this. The brief process itself, timelines for all the individual projects that go through, especially in a regulated industry like ours, all of them need due diligence. It’s incredibly repetitive, and for organizations like ours that thrive on creativity and strategy and the human spark combating the fatigue of that day-to-day work with the achievement and the creative process is one that needs to be balanced. AgencyOS allows our team members to really focus in on the creative process, strategy process and being able to interact with our clients to be able to refine what the best solution is for them. From a communications standpoint, by removing a lot of the repetitive actions, we really are creating an opportunity for much fertile exchange of ideas and to challenge those more complex jobs, to have more time and for us to explore more challenging ideas.

Barr: The other aspect of it that’s remarkable is the fact that we’re able to take that human feedback and adjustment and, through what’s called a reinforcement learning, or RL, algorithm, feed it back into the model as part of the human in the loop.

So tell me, what is the output of AgencyOS like today? You put in your parameters, you get out a brief, what does that look like? How much time do you have to spend editing and refining it? What does that process look like?

Cullmann: When the internet was much, much earlier on, I think there was a lot of nuance around search. The quality of the search returns you got was very much related to how you put in the search. The same thing is true when we begin working with prompts. Doug’s team has built out a process inside of AgencyOS that automatically refines some of the prompts through an engineering process.

It will prompt you for additional information if you leave out deadlines or requirements. It understands some of the nuances of medium. It understands what an email is, what a banner ad is. It understands the requirements of Facebook, X, TikTok from a structure standpoint. It understands what marketing objectives look like. It also understands, when trained against a specific clinical claims library, the disease state that allows a client’s product to have a unique value in the marketplace, and all of the justification for an FDA approval that’s associated with that. All of the due diligence is folded into that claims library, which means, when trained, it can not only create emails, but create emails that are pertinent to our client’s unique value proposition in the marketplace – its unique value proposition to a physician as to how they might choose that product for a specific patient, assuming the patient meets the profile.

Barr: One of the differentiators between our platform and something like ChatGPT is that ChatGPT is a single agent. What we’ve done with our platform is that we have multiple versions. Each represents a separate role within an organization. We have a senior project manager, we have a creative director, we have just a straight-up project manager, we have a strategist involved, and we have a programmer. And then we can scale these. They all communicate with each other to accomplish a specific task.

Cullmann: A product may change the indication, the solution or the patient population that the FDA would approve it for … we could quickly iterate on many different changes across those tactics, changing the speed of response we can have and making this more compliant as an industry. When you’re a person and you’ve spent the last year working on a specific indication, it’s very hard to pivot when you’re doing a lot of these repetitive tasks. If you had to update 20 or 30 banner ads with the label information, the likelihood of a mistake dramatically increases as you go through those repetitive tasks.

So what’s next? Now that you’ve got this tool, how are you going to keep building on it? And how do you see AI helping in that?

Barr: At Pixie Dust, we actually have two challenges that we’re looking to solve in the future. The first is more immediate. We’ve demonstrated the platform to people, and truth be told, some of them have pulled me aside and said, “the team’s terrified.” So we have to educate people to understand what it does correctly, what it doesn’t do correctly, and how people are still involved and need to be involved. We actually have to spend a lot more resources on educating people about the technology, which is kind of surprising to us. We thought, we developed this and everyone’s gonna jump on board and use it, but there’s that fear involved: what does that do to our business model?

And the second thing is, we want to focus on what are called vision models. Currently, large language models are exactly that: they’re language models. A model essentially predicts the next word and writes it out. But that’s only one half of the world. The other half of the world is vision-based – it’s video, or it’s images. Vision models aren’t just about creative output, like Adobe Firefly, where it generates images. What I’m really describing is how these models see and interpret the visual world around them. For example, you can upload an image of a graph and start asking the model questions about the data points in the graph. We can do that work today. That world needs to be expanded upon to make better use of it.

Cullmann: As businesses begin to use this, I think there are a lot of initial fears: we can’t put our proprietary data into the cloud for general collection. So there needs to be a much more nuanced understanding of data and data rules – this is my company’s data, this is my client’s data, this is public data – and managing that, and which platform you choose to manage it with, are all important elements of the decision-making process of how you’re using this, weighed against the risk and reward.

 

Generative AI is the creative colleague you’ve been waiting for
https://www.prdaily.com/generative-ai-is-the-creative-colleague-youve-been-waiting-for/
Wed, 30 Oct 2024 11:00:46 +0000

You can accomplish more together.

Samantha Stark is founder and chief strategist at Phyusion.

As AI tools become available to everyone, it’s now easier than ever to produce polished materials at scale, creating new challenges for PR professionals. As more organizations leverage AI tools, we’re seeing a sea of sameness in content that makes it increasingly difficult to capture attention.

This new reality means the bar for quality content is rising. The key to better content is understanding how to use AI strategically – not just for efficiency, but to push our creative boundaries and tell better stories.

 

 

AI as your thought partner

Imagine having a tool that takes the grunt work out of content creation and inspires, challenges, and pushes your thinking further. A tool that lets you vet your hypothesis faster, brainstorm more freely, and unlock your creativity.

By engaging with AI tools through techniques like chain-of-thought prompting and directional stimulus, PR professionals can explore new angles and challenge their assumptions. Think of chain-of-thought as asking someone to show their work – have the AI break down an idea or strategy step by step. For directional stimulus, I like to play devil’s advocate, pushing the AI to consider completely different angles I might not have thought of. The key is approaching these tools not as content generators, but as partners.
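As a concrete illustration (my phrasing, not Stark’s), here is roughly what those two techniques can look like as prompts; they can be pasted into any chat tool or sent through an API:

```python
# Illustrative prompt phrasings for chain-of-thought and devil's-advocate
# (directional stimulus) prompting. The wording is an assumption, not Stark's.

chain_of_thought = (
    "We want to pitch our client's new sustainability report to business media. "
    "Walk me through your reasoning step by step - which outlets, which angle and "
    "why - before giving a final recommendation."
)

directional_stimulus = (
    "Now play devil's advocate. Assume the sustainability angle falls flat. "
    "What completely different angles could make this story land, and what would "
    "each require from us?"
)

for prompt in (chain_of_thought, directional_stimulus):
    print(prompt, end="\n\n")
```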

Changing our research process

Rather than spending hours manually gathering and analyzing information, teams can now use AI to synthesize large amounts of data, identify patterns, and surface insights without having to tap research experts.

This shift allows us to spend more time on high-value activities like developing unique angles, pressure-testing assumptions, and crafting narratives that better resonate with our audiences.

The tools I use the most

AI tools are changing quickly, each offering unique strengths for PR teams:

  • ChatGPT has strong reasoning abilities, and with access to the internet, offers real-time current events and trends analysis, making it invaluable for developing timely narratives. Its ability to summarize large texts, reformat information into various tones and generate tailored responses makes it highly flexible for communication professionals. It excels at brainstorming using advanced prompting techniques, and its new Canvas feature allows you to edit and brainstorm with the AI directly in the document.
  • Claude excels at capturing a more authentic voice in written materials, especially when you provide brand voice guidelines, along with analyzing complex documents and providing nuanced insights. Additionally, with Projects, you can upload a lot of background context and interact with it. Its ability to process multiple sources while maintaining context makes it particularly useful for developing more complex ideas.
  • Perplexity stands out for its ability to deliver well-researched, factual responses drawing from current sources. Its powerful natural language processing engine provides nuanced insights into complex topics, making it especially valuable for content creation requiring depth. The tool’s interactive question-and-answer format helps refine queries and provides more reliable sourcing, including reference materials for easier fact-checking.
  • Google’s NotebookLM brings a unique collaborative approach to content development. Its interactive notebook-style interface is designed for collaborative brainstorming and hypothesis testing, allowing users to accumulate research materials, annotate findings, and dynamically interact with information. A particularly valuable feature is its ability to create audio summaries that sound like podcasts from uploaded content, which can accelerate learning complex topics for many.

Human judgment above all else

While AI tools provide powerful support, we need human judgment and creativity overseeing the full process. The partnership works best when AI allows us to focus on strategy, following a process such as this:

Start with research: We need human insight to identify relevant trends, understand context, and determine which angles will resonate with specific audiences. AI can accelerate this process, but strategic judgment comes from experience.

Point of view development: Creating a distinctive perspective requires blending industry knowledge, client understanding and cultural awareness – areas where human expertise is irreplaceable. AI can help test and refine these viewpoints, but the core insights should come from human strategists.

Content refinement: While AI can generate initial drafts, human editors ensure the content maintains authenticity, emotional resonance and brand consistency. This includes verifying facts, adjusting tone, and ensuring cultural sensitivity.

Essential considerations

While these tools offer tremendous potential, they require careful attention to two key areas: accuracy and bias. AI can make up convincing but false information, which remains a significant concern. All AI-generated content requires thorough verification, especially for sensitive communications.

Equally important is the conscious effort to identify and mitigate bias in AI-generated content. This starts with how we frame our prompts and requests. When developing content, be explicit about the need for inclusive language and diverse perspectives. Review outputs carefully for unintended stereotypes or exclusionary language. For example, when crafting corporate messages, specify the need for content that reflects diverse workplace cultures and experiences.

Looking forward

As AI tools continue to evolve, and believe me they will, to keep up we need to focus on using them strategically:

  • Using AI to identify unique angles and insights that others might miss.
  • Testing different narrative approaches quickly.
  • Focusing on developing distinctive points of view that stand out.
  • Maintaining rigorous standards for accuracy and originality.

Our value lies in fresh perspectives. In a world where people are overwhelmed by messages, success comes from focusing on what makes our brands unique and creating genuine value. That’s where the true power of AI in PR lies: in giving us the space to focus on what matters.

Samantha Stark will be one of dozens of expert presenters at Ragan’s AI Horizons Conference, Feb. 24-26 in Miami. Get more information.

 

Using custom digital twins to better target messaging
https://www.prdaily.com/using-custom-digital-twins-to-better-target-messaging/
Thu, 24 Oct 2024 11:00:59 +0000

An evolving tool for long-term strategies.

The emergence of generative AI has brought significant changes to many areas of the public relations sector. One gaining traction right now is “digital twins” – virtual replicas of target personas.

In essence, digital twins are similar to the character profiles used in communications and marketing plans for decades, according to Ephraim Cohen, global managing director of media, platforms and storytelling at FleishmanHillard. The key difference is that digital twins are more dynamic in that they can use a broader range of data, which leads to better insights based on real-world results.

 

 

Cohen’s FleishmanHillard team first delved into generative AI more than five years ago as part of its ongoing effort to better understand audiences. When ChatGPT and other next gen products burst onto the scene around 2022, the agency “almost immediately started looking at digital twins” to make harvesting those insights “much faster, much easier and much more cost-effective to develop” than the traditional personas created by hand.

Jon Lombardo, co-founder of synthetic research platform Evidenza, noted that digital twins also offer enhanced flexibility, allowing teams to test multiple messages simultaneously and compare the responses to each version.

“Things that used to take months now take literally minutes,” he said.

Developing a program

FleishmanHillard assembled a team of about 50 people to begin developing and testing its digital twin frameworks. The initial approach involved training bots on datasets related to audience behaviors, preferences and online conversations.

Cohen couldn’t go into detail about specific types, but he said FleishmanHillard’s main focus has centered on B2B and B2C audiences.

“We didn’t want to do it with real people, because there are a lot of legal and ethical ramifications there,” Cohen explained. “So we started by taking data sets on how people behaved, their favorite brands, purchase habits and online conversations, and then training bots on those data sets.”

The agency also used qualitative data sources such as academic papers, books and news clippings to build a more comprehensive understanding of their target audiences. That gave insight into things like behaviors, word choices and even their general thought processes.

The goal was not only to gain deeper audience insights, but to also create interactive tools that could assist with media relations and content strategy, Cohen said. He shared that the agency has even experimented with creating profiles of journalists and influencers to better understand how to position stories and content in a “way that resonates.”

Digital twins have made the once static persona “come to life,” Lombardo said. They can have actual names, roles and financial information, allowing for in-depth questioning. By asking the virtual person questions, they can gain the immediate feedback needed to model customer preferences, motivations and pain points.

“(Digital twins can) model the entire sample and give you a more robust view of what the market thinks,” Lombardo said. He added that Evidenza’s clients have had the most success using the platform to reach hard-to-access communities.

“Most of the people that PR people want to impress are not taking surveys or picking up the phone,” Lombardo said. “And in some ways, the only way to talk to them or model them is to use AI.”

Another area digital twins can help with, Lombardo said, is narrative and message testing – understanding how different stakeholders will respond to new campaigns or messaging. Beyond just helping to generate ideas, Lombardo advised PR pros to start asking their AI personas if they like the story angle or the messaging and why they feel that way.

Not as effective with real-time analysis

While the technology is promising, Lombardo highlighted that digital twins have limitations, particularly in real-time situations, such as crises or campaign results as they’re coming in.

“It’s very good at things that have a broader view, like research or segmentation or narrative testing,” he said. “It’s not as good at things that depend on real-time, immediate assessment.”

Cohen largely agreed with those sentiments, especially for digital twin programs just getting started.

Digital twins won’t be perfect from the start, especially when it comes to real-time processes. This is largely because most standard platforms aren’t designed for real-time use, but rather to learn from past data.

Cohen said that many of the initial results from FleishmanHillard’s trials weren’t relevant to their work because they were based on outdated information. It’s possible to train tools and keep them up-to-date, Cohen said, but a team needs to constantly feed and update them with new information. While it’s possible in theory, Cohen said that in practice, most digital twins would require a custom application to draw on real-time data.

While the technology isn’t there yet, Cohen said he feels it is moving in that direction. He said many augmentation technologies, like APIs, can connect to bring real-time data into a gen AI application.

“As it stands, we’re not there yet,” Cohen said. “But we’ll get there.”

Casey Weldon is a reporter for PR Daily. Follow him on LinkedIn.

Microsoft CCO on how AI can enhance internal comms
https://www.prdaily.com/frank-shaw-ragan-ai/
Wed, 23 Oct 2024 10:00:01 +0000

Shaw shared his ‘Dream State’ internal comms workflow during Ragan’s Internal Communications Conference.

The rise of generative AI technology has proven to be a reckoning point for communicators over the past few years. Whether you fear that it could come for comms jobs in the future, or that it’ll free up communicator workflows for increased creativity and productivity, nearly everyone has an opinion on what AI means for the future of communication.

Microsoft Chief Communications Officer Frank Shaw kicked off Ragan’s Internal Communications Conference at Microsoft HQ in Redmond, Washington, by demystifying the company’s AI innovations with Microsoft Copilot and explaining what tools comms pros can implement now for smoother sailing along their communications journeys.

Applying AI to key parts of the workflow

Shaw emphasized that workflows are central to every internal communications role. The opportunity lies in identifying the parts of those workflows where AI helps fill in the gaps.

Shaw outlined a simplified internal comms journey that always begins with a directive or news item that needs sharing. He said that AI can help work on the granular bits of those processes, allowing communicators to focus on the bigger picture.

These areas include:

  • Creating a comms strategy and campaign plan.
  • Revising the plan following feedback.
  • Writing and designing content and communications.
  • Revising based on review.
  • Refining the comms based on audience feedback.
  • Analyzing and compiling a metrics report.

Frank Shaw’s workflow chart, as shared at Ragan’s 2024 Internal Communications Conference. [Image courtesy of Microsoft.]

“You’ve got to break processes down to their atomic steps,” Shaw told the crowd. “There are some things that will remain uniquely human, but other areas in which you can get an assist from AI. It can help us as communicators focus on the most important things.”

Shaw explained how AI can help pick up the day-to-day rote work that can bog down a communicator’s busy schedule, enabling them to focus on the projects that have tangible impacts on their employee audience. For one, AI can pick up many messaging responsibilities that currently fall on comms pros, including reviews and refinements.

“It’s a dream space that we’re driving toward,” he said. “We’re seeing a 20 to 30% improvement in total time to task that allows people more ability to do what they want to do.”

AI as a companion through the entire comms journey

True to its name, Shaw described Copilot as an assistant that’s able to navigate the waters of internal communications. He told the audience that just like any other part of a working person’s routine, to be effective, AI usage needs to become habitual.

“We’re all here because we’re good at our jobs — but if there was a way we could build net new tools and change our workflows through AI?” Shaw said. “For that to work, we need to make AI a habit and reconfigure how it fits in.”

Beyond better-known tasks like content creation and editing, Shaw suggested that communicators use AI as a source of feedback as well, sharing how Copilot can suggest different ways to present a message or offer an alternative point of view for ideation. You could ask for feedback on the thesis of a piece, or ask for other ways to engage an internal audience, another set of steps on his shared comms journey.

“Think about AI as a persona, as a sparring partner, as a brainstorming buddy,” Shaw said.

When used the right way, AI can also serve as a complementary tool that helps employees feel better about their job performance. Shaw said it has helped teams strengthen both their sense of productivity and their satisfaction at work.

“People who like their jobs tend to perform well in their jobs and stay,” he said. “With this program, we really like what we’re seeing so far.”

Along for the entirety of the comms trip

The last steps of the internal communicators’ journey involve sharing the content at hand, monitoring it for feedback and analytics, and reporting the results back to your relevant stakeholders.

Even with the help of an AI assistant or agent, a communicator’s influence is central to considering the needs of people and culture — particularly a culture of opportunity and trying new things.

“When you have that culture of opportunity and excitement, people will experiment,” Shaw said. “They will try new things, share how they failed and what they learned along the way. That’s the culture we want to build with all the new tools that come our way.”

With its reusable steps, Shaw’s map of automation’s influence along the comms journey applies to nearly any situation. Reminding communicators of the rapid decisions and judgments they will make in the upcoming election, Shaw ended with a hopeful note that leaning into the workflow while thinking about these specific AI inflection points can help comms pros breathe a little easier.

“When you’re in the hot seat, it’s good to have a set of tools that can help you a little bit,” he said.

Sean Devlin is an editor at Ragan Communications. In his spare time he enjoys Philly sports and hosting trivia.

The post Microsoft CCO on how AI can enhance internal comms appeared first on PR Daily.

]]>
https://www.prdaily.com/frank-shaw-ragan-ai/feed/ 1
One Big AI Tip: Train AI on your brand voice https://www.prdaily.com/one-big-ai-tip-train-ai-on-your-brand-voice/ https://www.prdaily.com/one-big-ai-tip-train-ai-on-your-brand-voice/#respond Wed, 16 Oct 2024 11:00:03 +0000 https://www.prdaily.com/?p=344758 This new feature will highlight practical ideas that communicators can implement into AI workflows today. AI writing can often come across as flat and stale. However, with just a little extra work, artificial intelligence can learn and reproduce your brand voice. You don’t even need a paid account to do it. Start by pulling a […]

The post One Big AI Tip: Train AI on your brand voice appeared first on PR Daily.

]]>
This new feature will highlight practical ideas that communicators can implement into AI workflows today.

AI writing can often come across as flat and stale. However, with just a little extra work, artificial intelligence can learn and reproduce your brand voice. You don’t even need a paid account to do it.

Start by pulling together a reference library of materials that showcase your brand voice. This could be press releases, emails, speeches — whatever captures how your brand sounds and feels. You can also add a style guide if you have one. Either upload the documents directly or copy and paste them into the prompt field, then tell your AI friend to use them as a guide.

But even with that information, you won’t get perfection off the bat. Tell the AI what it does well and what needs work, and it will learn and improve over time. Be specific in your feedback — remember, it’s essentially a clever intern at this stage.

Now you’ll get generative writing that’s more in line with your brand voice and style. But remember: human editing is the last and most important step for any AI endeavor.
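
To make the process concrete, here is a minimal Python sketch of the same idea using the OpenAI SDK, assuming an OPENAI_API_KEY in the environment; the folder name, model name and draft request are placeholders, and the same approach works in any chat interface by pasting the samples into the prompt.

# Minimal sketch: feed brand-voice samples to a chat model, ask for a draft, then give feedback.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical reference library: press releases, emails and speeches saved as .txt files.
samples = [p.read_text() for p in Path("brand_voice_samples").glob("*.txt")]

system_prompt = (
    "You are a writing assistant for our brand. Study the voice samples below and match "
    "their tone, vocabulary and sentence rhythm in everything you draft.\n\n"
    + "\n\n---\n\n".join(samples)
)

history = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Draft a 150-word announcement about our new office opening."},
]
draft = client.chat.completions.create(model="gpt-4o", messages=history)  # model name is illustrative
print(draft.choices[0].message.content)

# The feedback loop: tell the model what worked and what did not, then regenerate.
history += [
    {"role": "assistant", "content": draft.choices[0].message.content},
    {"role": "user", "content": "Good structure, but too formal. Shorten the sentences and cut the jargon."},
]
revision = client.chat.completions.create(model="gpt-4o", messages=history)
print(revision.choices[0].message.content)

Whatever comes back is still a first draft; as noted above, human editing remains the last and most important step.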

For more ideas ranging from tactical to future-focused, join us for Ragan’s first AI Horizons Conference in Miami, Florida, Feb. 24-26.

The post One Big AI Tip: Train AI on your brand voice appeared first on PR Daily.

]]>
https://www.prdaily.com/one-big-ai-tip-train-ai-on-your-brand-voice/feed/ 0
How AI helped a research pro with no coding experience build a software tool https://www.prdaily.com/how-ai-helped-a-research-pro-with-no-coding-experience-build-a-software-tool/ https://www.prdaily.com/how-ai-helped-a-research-pro-with-no-coding-experience-build-a-software-tool/#respond Wed, 09 Oct 2024 10:00:56 +0000 https://www.prdaily.com/?p=344658 Reputation Leaders created a tool for categorizing open-ended survey responses — with no coding experience at all. Coding open-ended responses by hand is a time-consuming task for any PR or research professional gathering sentiment or feedback. Even with some automation, the tool is often clunky and clumsy. But Harry Morris, a project manager at Reputation […]

The post How AI helped a research pro with no coding experience build a software tool appeared first on PR Daily.

]]>
Reputation Leaders created a tool for categorizing open-ended survey responses — with no coding experience at all.

Coding open-ended responses by hand is a time-consuming task for any PR or research professional gathering sentiment or feedback. Even with some automation, the tools are often clunky and clumsy.

But Harry Morris, a project manager at Reputation Leaders, saw an opportunity to use AI to create a program to speed up the process. And with no software coding experience, he did just that, with a little support from Head of Operations David Lyndon.

Here’s how they pulled it off — and how you might do the same.

Answers have been edited for brevity and clarity.

How did you start investigating AI at Reputation Leaders?

Morris: We run a lot of surveys and ask a lot of open-text questions where people can type their responses. We use a statistics program that can automate some of it, but it generally takes a couple of days if there are a lot of responses. And I thought that I could probably build something. I’ve always thought coding is cool, but I’ve never been able to actually do it, other than little bits of JavaScript, which David taught me with his background in software engineering. But with the progress of ChatGPT, its code generation is starting to get very good.

I wanted it to get the list of open-ended responses and classify them into categories, but better than just picking up on groups of words, which is basically what statistical models do.

I had a direction and put it into ChatGPT. Through the process of having a back and forth with the AI, you form the idea as you’re having that conversation.

I’m saying ‘we’ as if ChatGPT were a person, but it does kind of feel like that when you’re in the process. Probably 20 minutes into that conversation, I had a fully thought-out idea of what I wanted to build. And then I said, ‘Right, we’ve got an idea. Code it.’

It spat out this long, jumbled list of a lot of things, and I didn’t particularly understand what was going on. I took it and put it into Google Sheets. You can run code in the back end of Google programs, kind of similar to what you can do with Excel. I put it in, tested it and it didn’t do anything.

I thought it was going to work first time, but it completely didn’t work. As soon as I had a problem though, I could grab the error, grab the snippet of code, put it back into GPT, iterate, test, and then fix each part. That took a bit longer, but after a couple of hours it worked pretty much as I wanted, which blew my mind a little, because this was the first time we had tried anything like this.
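
For readers curious what the bare bones of such a tool look like, here is a rough Python sketch of the same two-step idea, deriving themes and then classifying each response, written against the OpenAI SDK rather than the Google Sheets script the team actually built; the model name and sample responses are invented for illustration.

# Rough sketch of the two-step idea: ask for themes, then assign each response to one.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; this is not the team's actual script.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

responses = [  # invented sample survey answers
    "The checkout process was confusing and slow.",
    "Loved the support team, very quick replies.",
    "Too expensive compared to the alternatives.",
]

def ask(prompt: str) -> str:
    out = client.chat.completions.create(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return out.choices[0].message.content.strip()

# Step 1: have the model propose a short list of theme labels.
raw_themes = ask(
    "Read these survey responses and propose up to 5 short theme labels, one per line:\n"
    + "\n".join(f"- {r}" for r in responses)
)
themes = [t.strip("-• ").strip() for t in raw_themes.splitlines() if t.strip()]

# Step 2: assign each response to the best-fitting theme.
for r in responses:
    label = ask(
        f"Themes: {', '.join(themes)}\n"
        f"Which single theme best fits this response? Reply with the theme only.\n{r}"
    )
    print(f"{label:30} | {r}")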

Walk me through the ‘conversation’ you had with ChatGPT defining your idea. What was it like brainstorming with a robot?

Morris: I didn’t start with anything too complicated. It was a couple of lines — I wanted it to take in a list, give me some themes and spit out some categories, then go back and put each response into those themes. It came back and questioned me, asking, ‘Have you thought about this? Have you thought about the length of the categories? Have you thought about how many there should be? Have you thought about X, Y and Z?’ I could move through that conversation and cherry-pick the bits that I liked.

You had very little experience in coding, yet you did troubleshooting on code. How did that work?

Morris: Most code runners will point out the row that isn’t working, and because I knew what I wanted at the end, I could add in checks to see, for example: ‘Right, at this point, what is this variable?’

Throughout that whole process, even when I didn’t know what I was doing, ChatGPT has an understanding of the programming languages, so I can tell it in basic English what I need, e.g. ‘I want to add a stop that tells me what variable A is at this point in the code,’ and then I put that back into the code runner and it runs again. And you build an understanding through that process.

David, you do have experience in coding and with software. What did you make of this process?

Lyndon: Yes, it’s funny listening to Harry talk about it, actually, because, ‘code runner,’ for example, that’s not language that any software engineer is ever going to use. But of course, it makes absolute sense in normal speech, and AI allows you to do that.

When Harry came to me with the output and showed me what he’d done, part of me was just hugely impressed, because it was a piece of software that worked from beginning to end and did exactly what he wanted it to do.

Another part of me was hugely frustrated, because if I want to do that, I’ve got these skills that mean I could do it, but actually it would have taken me a long time, even with all the skills that I’ve built up over 25 years. I would still have had to go through that process of coding and testing. So getting to see the end result and saying, wow, Harry’s actually just persevered through this process, and it allowed him to get from concept to product in a reasonably short amount of time is hugely impressive.

So could you ballpark how long this would have taken you to do the old-fashioned way?

Lyndon: If I had taken this and I just had time to code it myself, I would say six to eight hours.

And Harry, about how long did it take you to do this?

Morris: I’d say, to get it to the point where I showed David, it was probably double that.

Lyndon: Yeah, but I have 25 years’ experience. So it’s 25 years plus six to eight hours.

Once you had this tool in working condition and you deployed it, how has this impacted your workflows?

Morris: We did the calculations for it. It probably cost us £60 in direct costs, plus between £700 and £1,000 in man hours, so just over £1,000 all-in to build this. Using this tool would be about 20 times faster than doing the process manually.

In a project last week, I grabbed some comments from an article and ran it through the code. It took me five minutes to add, but we talked through it at length during the final client presentation, because they thought it was useful.

So this is really changing your client product in a lot of ways.

Lyndon: It’s increasing quality. It’s increasing the depth of the analysis that we can do. It’s saving us time, yeah, but it’s not forcing us to change the way that we work, the way that we do things.

Where do you go from here, now that you have this foundation?

Lyndon: I think the limit right now is imagination. So what can you imagine this doing? And then go and test it. Sometimes it’ll work and sometimes it won’t, but right now it’s so new. We don’t know what those guidelines are. We don’t know what those barriers are. We don’t know what the limits are.

The post How AI helped a research pro with no coding experience build a software tool appeared first on PR Daily.

]]>
https://www.prdaily.com/how-ai-helped-a-research-pro-with-no-coding-experience-build-a-software-tool/feed/ 0
6 ways communicators can influence the AI budgeting process https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/ https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/#respond Mon, 07 Oct 2024 10:00:43 +0000 https://www.prdaily.com/?p=344629 Your team will be using AI. Here’s how to take some control. This is part three in Ragan’s series on budgeting for communicators. Read part one here and  part two here. When you gain influence, you secure budget. Similarly, the rapidly-accelerating applications of AI open unrealized opportunities for communicators to influence the safe, responsible and […]

The post 6 ways communicators can influence the AI budgeting process appeared first on PR Daily.

]]>
Your team will be using AI. Here’s how to take some control.

This is part three in Ragan’s series on budgeting for communicators. Read part one here and  part two here.

When you gain influence, you secure budget. Similarly, the rapidly-accelerating applications of AI open unrealized opportunities for communicators to influence the safe, responsible and practical implementation of the tech across business lines.

AI will benefit those closest to the business, and becoming closer to the business is the best way to gain influence. To this end, it stands to reason that communicators should seek to allocate spend for operational AI tools.

This requires documenting and demonstrating impact, which is hard to do with new tech when you lack benchmarks and baselines. In the absence of this, Catherine Richards, founder of Expera Consulting and AI coach to Ragan Communications Leadership Council members, suggests focusing on the strengths and differentiations that are unique to comms.

“The differentiation is a trust catalyst,” Richards said, “because communications generally leads the relationships with investors, analysts and media.”

“Relationships are your secret sauce,” she continued. “Other functions don’t have those, and so communicators can build that trust. You have to be transparent there. You guard the reputation. Many times you are the navigator for the ethics.  Lean in there.”

Ragan and Ruder Finn’s recent survey of AI in internal communications underscores how important trust is to scaling AI implementation across the business —  50% of senior communications leaders said data privacy and fake news were their top concerns for working with AI, while 48% of the C-suite cited resistance from key stakeholders as a barrier.

Other top concerns included the idea that communication overload would result in misinformation (41%), the loss of personal touch and humanity in communication (37%) and the lack of internal expertise and resources (35%).

With trust as your guide, it’s easier to secure more influence over setting the budget for operational AI tools with a strategy that weaves in stakeholders across the business, prioritizing transparency while connecting the value of these tools back to organizational goals.

Here’s how to start making the case:

1. Show ROI and business impact.

EY’s recent study on AI investments found that senior leaders at organizations investing in AI are seeing tangible results across the business, with those investing seeing the most positive ROI around operational efficiencies (77%), employee productivity (74%) and customer satisfaction (72%).

These are all metrics that communicators can, and should, track.

Quantifying the benefits of AI for communications starts with establishing a baseline (and not comparing yourself to industry benchmarks just yet). Start by documenting how GenAI content tools improve productivity and tracking time saved on mundane manual tasks.

When testing AI tools to better target specific employee or stakeholder segments and customize your outreach, measure the open and click-through rates of those AI-assisted messages against similar messages sent before you started using the tool.

With these numbers in hand, create a simple correlation model to show how your tools have positively impacted KPIs including engagement rates, employee sentiment and resonance of executive messaging while saving the comms team time.

A model that ties time saved to the cost of that time, breaking everyone’s salary down to hourly rates and multiplying by the hours gained, may be worthwhile too.
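
As a concrete illustration of that baseline math, here is a short Python sketch with entirely made-up numbers: it converts hours saved into a dollar value, compares open rates before and after AI-assisted sends, and runs a simple correlation between monthly hours saved and engagement. Every figure is a placeholder to swap for your own tracking data.

# Illustrative baseline math with placeholder numbers; replace them with your own tracking data.
from statistics import correlation  # Python 3.10+

team = [
    # (role, annual salary in USD, hours saved per month with AI tools)
    ("Comms manager", 95_000, 12),
    ("Writer", 70_000, 20),
    ("Social lead", 78_000, 8),
]
WORK_HOURS_PER_YEAR = 2080  # rough full-time baseline

monthly_value = sum(salary / WORK_HOURS_PER_YEAR * hours for _, salary, hours in team)
print(f"Estimated value of time saved per month: ${monthly_value:,.0f}")

# Before/after comparison for AI-assisted targeting (placeholder open rates).
open_rate_before, open_rate_after = 0.31, 0.37
lift = (open_rate_after - open_rate_before) / open_rate_before
print(f"Open-rate lift on AI-assisted messages: {lift:.0%}")

# A very simple correlation between monthly hours saved and engagement rate (placeholder series).
hours_saved_by_month = [25, 32, 40, 44, 51, 58]
engagement_rate_by_month = [0.30, 0.31, 0.33, 0.34, 0.36, 0.37]
print(f"Correlation: {correlation(hours_saved_by_month, engagement_rate_by_month):.2f}")

Correlation is not causation, of course, but a number like this gives leadership something concrete to react to.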

Being open with your team about what you’re tracking, and positioning this correlation as an accountability measure to grow and scale, will center the exercise around trust.

2. Align your metrics and models to business objectives.

Aligning your content and editing efficiencies with broader organizational goals like revenue growth makes it easier to connect your efforts to the business when you can explain how the time saved is being applied to future projects.

While partnering with IT, Finance and Marketing can help you allocate comms budget for cross-departmental projects and collaborations, it can also help you pull metrics around customer satisfaction and brand positioning that may not already live on your internal measurement dashboard.

Turning other departments into joint advocates for AI investments requires explaining how the tools you want complement technologies other teams are using and improve workflow automation across the business.  In turn, this grounds the relationship in trust and creates mutual accountability.

3. Present your strategy.

A clear and detailed implementation plan that includes guidelines spells out every team’s rules of play and creates visibility of ownership along the way.

Your plan should include specific use cases, timelines and measurable outcomes that each functional owner is responsible for tracking.

Your plan can also demystify the AI budget process by doubling as a roadmap to show how incremental investment can lead to long-term results.

A move away from free tools to secure AI tools is a solid first step that you can frame around risk mitigation. Training on the investment secured is a logical second step.

Why is training crucial here?  Our survey with Ruder Finn found that around half (53%) of C-suiters aged 43 and under said they were satisfied with the AI training they received compared with 42% of C-suiters aged 44 and over. But a much wider training gap exists between the C-suiters surveyed and other communicators.

This gap emphasizes the need for more personalized training resources, which can build trust and scale implementation at the same time.

PwC leads the way here with its innovative training exercises, including a feedback loop between comms and product teams to ensure the process is iterative and collaborative.

4. Educate leaders and decision-makers.

When comms takes an early adopter mentality to research and responsibly experiment with emerging AI tech, it’s easier to educate internal stakeholders on how AI works and its tangible benefits to the business.

Be prepared to answer questions about costs, security and integration by sharing case studies from other communications leads who have found success at scale. Visit Orlando’s Adeta Gayah scaled her social media team’s operations with automated image tagging and GenAI research ideation, while PwC’s Gabrielle K. Too-A-Foo uses the firm’s tools to process large data sets and standardize SEO procedures.

Taking an educational approach by pointing to case studies not only reinforces trust — it also positions you as a leader in the process.

“That leadership voice can come from anywhere in the organization, and it’s somebody who has courage, who’s willing to be vulnerable and say, ‘I’m gonna test this out. I’m gonna take a risk,’” Richards said.

5. Launch pilot programs to document quick wins.

After aligning with leadership expectations, propose pilot programs that can test these tools with a small cohort and designated owners before expanding them out to wider teams.

These pilots should be focused on producing quick, measurable results. During her time at VMware, Richards collaborated with engineers to document marketing use cases while working with GenAI tools Jasper and Writer.

Hotwire Global gamified the pilot process by challenging more than 400 employees to create custom GPTs and empowering all functions to pilot their own use cases in the process.

“We received so many awesome, just mind-blowing examples that we never even thought of, from hilarious to very useful,” Anol Bhattcharya, managing director, marketing service: APAC for Hotwire told Ragan.

“[This includes] some awesome internal process development tools, some of them client-facing, which we are developing further now. It’s not only the AI — any comms and marketing agency’s innovation should look like this: give them the tools, teach them basics and get out of the way, rather than trying to mold it too much.”

6. Share industry trends and competitor insights.

It’s often said that comparison is the thief of joy, but the organizations that implement and scale AI responsibly will gain an advantage over their industry competitors. To that end, it’s crucial to emphasize where and how AI is being implemented in communication strategies across your industry.

A recent CNBC Technology Executive Council bi-annual survey found that, among companies spending on AI, “roughly four times as many are investing in employee-facing AI projects rather than customer apps.”

Meanwhile, Ragan and Ruder Finn’s survey shows what industries are using AI the most, with the aerospace, aviation and transportation industry reporting the highest daily use.


Just under half of respondents surveyed in the manufacturing and technology industry (49%) are using AI daily—less than those in the education, government and nonprofit spaces. Unsurprisingly, heavily regulated industries like healthcare and finance have the smallest adoption rates.

Staying up on our research on AI in communications, and other fields, helps you point toward broader trends in the communication space that position comms as forward-thinking innovators.

In turn, this positions your proposed investments as less of a luxury and more of a necessity.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Additional resources on securing comms budgets, including our recently released budget report, are available exclusively to members of Ragan’s Communications Leadership Council. Learn more about joining here.

The post 6 ways communicators can influence the AI budgeting process appeared first on PR Daily.

]]>
https://www.prdaily.com/6-ways-communicators-can-influence-the-ai-budgeting-process/feed/ 0
How AI helped Catherine Richards amplify empathy https://www.prdaily.com/how-ai-helped-catherine-richards-amplify-empathy/ https://www.prdaily.com/how-ai-helped-catherine-richards-amplify-empathy/#respond Wed, 25 Sep 2024 10:00:03 +0000 https://www.prdaily.com/?p=344480 The founder of Richards Creative on how AI can boost creativity and collaboration. AI’s sudden rise has left communicators with many questions about its capabilities and what it means for them in their roles. With its ability to create copy, summarize documents, and much more, AI tools offer a significantly underutilized array of applications that […]

The post How AI helped Catherine Richards amplify empathy appeared first on PR Daily.

]]>
The founder of Richards Creative on how AI can boost creativity and collaboration.

AI’s sudden rise has left communicators with many questions about its capabilities and what it means for them in their roles. With its ability to create copy, summarize documents, and much more, AI tools offer a significantly underutilized array of applications that communicators should embrace.

In our latest edition of Ragan’s “AI Helped Me” series, we spoke with Catherine Richards,  founder of Richards Creative, about her early exposure to  AI, how she uses it to help her clients unlock their creativity, and more.

Sean Devlin: Could you tell us a little about how you first started interacting with AI in your role, and how it’s evolved?

Catherine Richards: My journey with AI began at VMware in 2022-2023 when I joined one of the world’s first Marketing AI Councils. We were pioneering the responsible integration of generative AI into a global marketing organization, essentially writing the playbook as we went. As a content strategist, I piloted the early enterprise-grade generative AI tools Jasper and Writer, quickly realizing how AI was set to transform content marketing radically.

This laid the foundation for my role as an AI strategist. Today, through Expera Consulting and The Strategist Blog, I help clients use AI to achieve their most ambitious goals. While the tools and technologies have evolved rapidly, the core principles we established early on — responsible implementation, amplifying human strengths, and fostering inclusive collaboration — remain as relevant and important to success today as ever.

When you first started using AI, how did you educate yourself on how to use it?

CR: As a strategist and creator, I believe in learning by doing. When our AI council provided licenses for Jasper and Writer, I jumped in immediately, experimenting and providing feedback to developers. While formal courses are helpful, nothing compares to hands-on experience.

This led to the creation of The Strategist Blog, where I share insights and document my AI journey. My approach has always been rooted in curiosity—engaging with the tools, exploring their potential, and sharing discoveries.

Catherine Richards, founder of Richards Creative

How does AI factor into your role today?

CR: I work with highly skeptical audiences who demand credibility over hype. My background in security, privacy, and regulation marketing informs the risk-aware approach I take with AI, ensuring any guidance I provide balances progress and compliance.

A key part of my role is helping clients understand how AI can augment human capabilities, not replace them. This often means rethinking workflows and team structures and developing new use cases.

I suggested a home improvement retailer use AI to calculate environmental impact metrics, empowering contractors to make more informed, eco-friendly recommendations. By incorporating AI insights, contractors can improve their decision-making, reinforcing the value of their expertise.

I also focus on making AI approachable. It’s not about insider tricks — it’s about practical solutions to real-world challenges. As AI-generated content becomes more common, I help teams view AI as a creative collaborator, enabling them to deliver high-quality, differentiated outputs.

Have you seen any changes to your workflow or customer satisfaction since you’ve begun using AI?

CR: Definitely. AI has become a creative catalyst for me. Early in my career, I relied on libraries, museums, and travel for inspiration. Now, AI provides a limitless library of references, broadening my creative scope. I recognize that AI models have inherent biases and limitations, as they are trained on imperfect data. While I’m mindful of this when using AI, it still offers vast resources that were unimaginable just five years ago.

What’s something about AI that you think communicators need to be talking about but aren’t discussing enough?

CR: AI can analyze vast amounts of sentiment and preference, enabling us to understand our audience better and craft more authentic communication. This doesn’t replace human empathy — it amplifies it. The key is using AI to enhance how we engage and connect with our audiences.

Do you have a big prediction for AI usage in the next few years?

CR: We’ll start to see a shift in what’s valued at work — strategic and creative problem-solving, as well as unconventional thinking, will become increasingly important. Brands will need to dig deeper to find more authentic expressions of their mission and points of view.

The good news is that AI will democratize access to information and tools, enabling individuals to bring their unique perspectives and talents to the forefront, regardless of traditional experience or formal training. This will give brands access to a broader range of diverse insights.

This shift will require organizations to rethink how they structure teams, moving toward flexible, project-based setups. I see a future where organizations train and provide professional services to a more diverse set of people to speak on behalf of the brand.

For younger professionals, your digital fluency positions you to lead in this AI-powered world. Dive into AI smartly, understand its limits, and lead with confidence. For those with a career’s worth of expertise, AI can unlock new ways to apply that knowledge, sparking an exciting new chapter in your career.

Sean Devlin is an editor at Ragan Communications. In his spare time he enjoys Philly sports and hosting trivia.

The post How AI helped Catherine Richards amplify empathy appeared first on PR Daily.

]]>
https://www.prdaily.com/how-ai-helped-catherine-richards-amplify-empathy/feed/ 0
Closing the AI gap in internal communications https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/ https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/#respond Tue, 24 Sep 2024 14:00:41 +0000 https://www.prdaily.com/?p=344467 Ruder Finn and Ragan’s study, “The Great AI Divide in Internal Communications” identifies an AI implementation gap between priorities and adoption rates.  This past spring, Ragan partnered with Ruder Finn’s internal communications arm, rf.engage to learn how communicators implement AI, and how they plan to use the tech to advance their internal communications work in […]

The post Closing the AI gap in internal communications appeared first on PR Daily.

]]>
Ruder Finn and Ragan’s study, “The Great AI Divide in Internal Communications” identifies an AI implementation gap between priorities and adoption rates. 

This past spring, Ragan partnered with Ruder Finn’s internal communications arm, rf.engage, to learn how communicators implement AI, and how they plan to use the tech to advance their internal communications work in the future.

Ruder Finn and Ragan’s “The Great AI Divide in Internal Communications” report surveyed communicators in North America and the U.K. across all levels of seniority and a vast range of industries. The results identify clear gaps between how AI is perceived and its application to internal comms efforts.

“Change of this magnitude is not straightforward, so it’s no surprise that gaps are appearing as organizations come to grips with how these technologies can deliver transformational benefits,” said Ruder Finn CEO Kathy Bloomgarden. “The key to success is to remember that any business solution must bring people along, underpinned by communications, and be linked directly to thoughtful integration within existing ways of working.”

Hearing the AI triumphs and challenges in the Ragan community has taught us that the opportunities to become AI champions are vast. New pathways are opening for communicators to serve as strategic advisors who mitigate risk by crafting governance policies and setting guidelines, and who use the communications potential of this tech to spread the influence of comms across the business.

A closer look at the largest gaps reveals where, and how, comms can secure that influence.

Understanding the gap between priorities and usage

Ragan and Ruder Finn’s research found that communicators recognize AI’s potential but lag in implementing it. On average, the report found a 16% difference between top internal comms priorities and the extent to which AI is used for those priorities.

While 57% of those surveyed consider executive messaging and positioning a top priority, just 34% are using AI tools to streamline their exec comms. Communicators can train a secure generative AI tool like GPT-4 to write in the style of their executives by providing examples of past messages, describing attributes of the executive’s voice such as tone, formality and sentence structure, and even spelling out words, phrases and language to avoid. Vetting any drafts with relevant stakeholders, including the executive team and counsel, will inspire further confidence to scale this process and close the gap.
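
One hedged way to operationalize that, sketched in Python against the OpenAI SDK: declare the voice attributes and banned phrases as data, build the system prompt from them, and flag any banned phrase in the draft before it reaches a human reviewer. Every attribute, phrase and prompt below is an invented example, not a recommended house style.

# Sketch: build an executive-voice prompt from declared attributes, then check the draft.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; all attributes and phrases are invented examples.
from openai import OpenAI

client = OpenAI()

voice = {
    "tone": "direct, optimistic, plain-spoken",
    "formality": "conversational but never slangy",
    "sentences": "short sentences, active voice, one idea per sentence",
    "avoid": ["synergy", "circle back", "utilize", "world-class"],
}
past_messages = ["...paste two or three vetted past messages from the executive here..."]

system_prompt = (
    "You draft internal messages in the voice of our CEO.\n"
    f"Tone: {voice['tone']}\nFormality: {voice['formality']}\n"
    f"Sentence structure: {voice['sentences']}\n"
    f"Never use these words or phrases: {', '.join(voice['avoid'])}\n\n"
    "Reference messages:\n" + "\n\n".join(past_messages)
)

draft = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Draft a short note announcing the Q3 all-hands meeting."},
    ],
).choices[0].message.content

# Simple guardrail before human and counsel review: flag any banned phrase that slipped through.
flagged = [w for w in voice["avoid"] if w.lower() in draft.lower()]
print(draft)
print("Banned phrases found:", flagged if flagged else "none")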

As exec comms is an underutilized AI use case, overcommunicating the drafting and editing processes also builds trust between comms and senior leaders to remind them of the human element and EQ required for the messages to resonate.

Similarly, 56% of communicators consider employee engagement a top priority, but only 39% use AI to assist with employee engagement. While that implementation rate is slightly higher than with executive communications, a 17% gap still exists.

A genAI tool can also help draft internal newsletters, memos and announcements consistent in an agreed-upon brand voice. This can even be harnessed to personalize onboarding materials for new employees that tailor key information about policies and values to each hire’s specific role.

AI-powered sentiment analysis tools, meanwhile, can interpret open-ended pulse survey answers or social sentiment from intranet posts, analyzing the language to craft a summary of how employees feel about a new change or initiative.
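
As one rough illustration of the underlying mechanics rather than any specific vendor tool, a few lines of Python with NLTK's VADER scorer can put a sentiment number on each open-ended answer and an average on the batch; the sample comments below are invented.

# Rough illustration of sentiment scoring on open-ended pulse survey answers (not a vendor tool).
# Requires: pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

answers = [  # invented sample comments
    "The new hybrid policy gives me real flexibility, I love it.",
    "Communication about the change was confusing and late.",
    "Neutral on the policy itself, but the rollout felt rushed.",
]

scores = [sia.polarity_scores(a)["compound"] for a in answers]  # -1 (negative) to +1 (positive)
for answer, score in zip(answers, scores):
    print(f"{score:+.2f}  {answer}")
print(f"Average sentiment: {sum(scores) / len(scores):+.2f}")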

Whether comms sits under the HR function or not, championing these use cases with an early adopter mindset positions communications in an advisory capacity that strengthens trust and helps close the gap.

Of course, this is contingent on effective training around each new AI tool and use case. A closer look at adoption level by seniority can inform your approach to scaling training.

Seniority adoption gaps underscore training opportunities

The research found that C-suite communicators are twice as likely to use AI as less senior internal communicators — 83% of C-suiters surveyed said they use AI daily, compared with 41% of senior and mid-level respondents.

C-suiters were also 25% more optimistic about AI in internal comms than their less senior counterparts.

This cohort is not a monolith. Segmenting the C-suite by age found that 100% of C-suiters 43 and under use AI daily compared to just 58% of C-suiters age 44 and over.

These discrepancies in daily use and optimism can be solved with training. How’s that going so far?

While around half (53%) of C-suiters aged 43 and under said they were satisfied with the AI training they received compared with 42% of C-suiters aged 44 and over, a much wider training gap exists between the C-suiters surveyed and other communicators.

Just under a quarter (24%) of C-suiters said they were satisfied with their organization’s training, while the number of satisfied senior and mid-level comms pros was just 8%. Concerning as those numbers are, the dissatisfaction shouldn’t be mistaken for disengagement — 64% of communicators across all seniority levels said they want to learn more about AI’s applications for internal communications.

Putting this all together, we’re looking at a C-suite sample that’s more comfortable using AI for internal communications tasks, and happier with the training being offered, than others who sit in the function.

Considered alongside the paltry level of training satisfaction across the board, this makes sense — those who don’t consider their level of training to be sufficient are less willing to dive in.

While demonstrating comfort with ambiguity is a valuable leadership competency, the risk management remit of internal comms pros, coupled with the myriad reports on what happens when AI implementation scales irresponsibly, may explain the gap between a desire for training and satisfaction with the training received.

This raises the question of how specific and detailed the AI training communicators currently receive really is. Are you surveying your team to address the root causes and concerns driving apprehension? While most training includes a focus on human-centered prompt creation using generative AI tools to draft executive messages and employee engagement content, your training can also go much further to explore things like:

  • Launching and sustaining an effective cross-departmental AI task force.
  • Shaping AI governance and crafting internal guidelines for cross-functional use cases that prioritize transparency and security amid the latest regulatory developments.
  • Streamlining employee experience comms around recruitment, engagement analytics and the intranet.
  • Building a blueprint for successful AI implementation that aligns communicators on each step, from committees to execution.

These insights are a reminder that comfort levels and competence are not one and the same. The most effective upskilling programs are personalized to each employee’s role and preferred style of learning.

Learning those preferences from the outset, and then training your communications function in kind, will ensure that they are empowered and equipped to bring the rest of your workforce along the adoption curve.

For more on the internal comms AI gaps among various industries and company sizes, check out the full report here.

Ruder Finn will unpack the results during Ragan’s Internal Communications Conference, Oct. 16-18 at Microsoft HQ in Seattle, WA. Register now!

The post Closing the AI gap in internal communications appeared first on PR Daily.

]]>
https://www.prdaily.com/closing-the-ai-gap-in-internal-communications-between-buzz-and-actual-use/feed/ 0
By the Numbers: These are the best AI tools for marcomm tasks, according to Edelman https://www.prdaily.com/best-ai-tools-for-marcomm-tasks-according-to-edelman/ https://www.prdaily.com/best-ai-tools-for-marcomm-tasks-according-to-edelman/#comments Thu, 19 Sep 2024 10:00:20 +0000 https://www.prdaily.com/?p=344429 See how major tools stack up for writing, analysis and more. New AI tools seem to pop up, mushroom-like, on a daily basis. Taking the time to figure out which of this colony of tools is best for which task can feel like way more work than just doing the tasks by hand. At PR […]

The post By the Numbers: These are the best AI tools for marcomm tasks, according to Edelman appeared first on PR Daily.

]]>
See how major tools stack up for writing, analysis and more.

New AI tools seem to pop up, mushroom-like, on a daily basis. Taking the time to figure out which of this colony of tools is best for which task can feel like way more work than just doing the tasks by hand.

At PR Daily, we’ve put several AI tools through their paces. Now Edelman is out with a new report that does the same, putting major LLMs Microsoft 365 Copilot, ChatGPT Enterprise, Writer, Claude and Gemini through their own tests.

The results? No one tool dominates, though the big three – Copilot, ChatGPT and Gemini – all turn in strong performances, with different strengths.

Here’s what the Edelman report found.

The best AI tool for writing

The big three tools finished in a dead heat in the writing category, Edelman found. ChatGPT Enterprise was best at processing large amounts of information, making it a solid choice for more in-depth writing needs. Gemini, thanks to its integration with Google, was adept at pulling the latest information to inform the piece. And Copilot’s integration with Microsoft helped the AI tool maintain a consistent brand voice across its copy, as it had reams of data to pull from across the Microsoft 365 suite.

This test shows the importance of the tools’ parent companies. Both Copilot and Gemini had a leg up because of their broader connections to the tech ecosphere – something that can be difficult for startups to compete with.

The best AI tool for research

Given the first test’s results, it’s not surprising that Gemini also scored strongly in the research category, in this case earning kudos for its ability to not only retrieve but prioritize information it gleans from Google’s giant search engine. It shared the podium with Copilot, thanks again to its ability to access and return data from within Microsoft 365 products.

The best AI tool for ideation

AI can be a solid tool for looking past our own biases and getting the big picture. Copilot again took a nod in the category, this time sharing the spotlight with ChatGPT. Both tools excelled at looking at the full scope of the conversation, using  past interactions to provide better outputs, more similar to how a real brainstorming partner might act. Edelman’s testers also enjoyed the ease of natural language prompting and the ability to integrate – there’s that word again – with existing workflow tools across the organization.

The best AI tool for synthesis

Edelman defines synthesis as “recalling specific data points from extensive information sets and transforming this data to create informative, easily-understood summaries.” Both Copilot and ChatGPT excelled in this arena, taking the most important ideas from dense text, which made them both an able partner for transforming those ideas into other materials, like press releases or campaigns. Copilot’s integrations allowed it the deepest well to draw from for its synthesis.

The best AI tool for design

Design was the only category with a clear winner, no double-podium here. Thanks to its use of DALL-E 3, ChatGPT stood head and shoulders above the rest, Edelman found. They praised the tool for its ability to interpret prompts precisely, expand on creative briefs and feed existing workflows.

The best AI tool for analysis

Analysis in this case means processing data to draw insights. This was the first and only category in which upstart Writer earned special praise. “Writer transforms raw data into coherent and relevant narratives, producing detailed reports and summaries that highlight key insights and trends,” the report says.

Meanwhile, ChatGPT shined when it was asked to do sentiment analysis, such as press coverage or social media. Copilot was at its best when it could use its integrations, such as with Excel or Power BI, to feed its work.

The bottom line

The major AI players currently make the best tools, this analysis found. Microsoft’s existing dominance in the tech industry means that it’s already so deeply embedded in most people’s work lives that Copilot just adds a new layer of depth and ease to tools most of us already know. ChatGPT made “generative AI” a household phrase and continues to raise the bar. Gemini’s writing abilities and access to Google’s massive search engine data stockpile make it a formidable player. But the other tools may not be ready for enterprise-level use just yet.

These are early days yet for generative AI. More changes will come; tools will rise and fall in ability and popularity. Continue to experiment and monitor the horizon for new trends and changes.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post By the Numbers: These are the best AI tools for marcomm tasks, according to Edelman appeared first on PR Daily.

]]>
https://www.prdaily.com/best-ai-tools-for-marcomm-tasks-according-to-edelman/feed/ 2
AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-11/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-11/#respond Wed, 18 Sep 2024 09:00:52 +0000 https://www.prdaily.com/?p=344422 A new OpenAI model was unveiled and California passes new AI regulations. AI tools and regulations continue to advance at a startling rate. Let’s catch you up quick. Tools and business cases AI-generated video continues to be a shiny bauble on the horizon. Adobe has announced a limited release of Adobe Firefly Video Model later […]

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

]]>
A new OpenAI model was unveiled and California passes new AI regulations.

AI tools and regulations continue to advance at a startling rate. Let’s catch you up quick.

Tools and business cases

AI-generated video continues to be a shiny bauble on the horizon. Adobe has announced a limited release of Adobe Firefly Video Model later this year. The tool will reportedly offer both text and image prompts and allow users to specify the camera angle, motion and other aspects to get the perfect shot. It also comes with the assurance that it is trained only on Adobe-approved images, and thus will come without the copyright complications some other tools pose.

The downside? Videos are limited to just 5 seconds. Another tool, dubbed Generative Extend, will allow the extension of existing clips through the use of AI. That will be available only through Premiere Pro.

Depending on Firefly Video’s release date, this could be one of the first publicly available, reputable video AI tools. While OpenAI announced its own Sora model months ago, it remains in testing with no release date announced. 

And just as AI video is set to gain traction, Instagram and Facebook are set to make their labeling of AI-edited content less obvious to the casual scroller. Rather than appearing directly below the user’s name, the tag will now be tucked away in a menu. However, this only applies to AI-edited content, not AI-generated content. Still, it’s a slippery slope and it can be difficult to tell where one ends and the other begins.

Meta has also publicly admitted to training its LLM on all publicly available Facebook and Instagram posts made by adults, dating all the way back to 2007. Yes, that means your cringey college musings after that one philosophy class were used to feed an AI model. While there are opt-outs available in some areas, such as the EU and Brazil, Facebook has by and large already devoured your content to feed the voracious appetite of AI models. 

OpenAI, the company behind ChatGPT, has released a new model, OpenAI o1, that focuses on math and coding prompts. OpenAI says these models spend “more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”

While this high-end, scientifically focused tool may not be a fit for most communicators, other departments  may use these tools – which means communicators will be in charge of explaining the how and why of the tech internally and externally. 

In a quirkier use of AI, Google is testing a tool that allows you to create podcasts based on your notes. It’s an outgrowth of notetaking app NotebookLM, creating two AI-generated “hosts” who can discuss your research and draw connections. According to The Verge, they’re fairly lifelike, with casual speech and enough smarts to discuss the topic in a way that’s interesting. This could be a great tool for creating internal podcasts for those with small budgets and no recording equipment.

On a higher level, the Harvard Business Review examined the use of AI to help formulate business strategy. It found that the tool, while often lacking specifics about a given business, is useful for identifying blind spots that human workers may miss. For instance, the AI was prompted to help a small agricultural research firm identify what factors may impact its business in the future:

However, with clever prompting, gen AI tools can provide the team with food for thought. We framed the prompt as “What will impact the future demand for our services?” The tool highlighted seven factors, from “sustainability and climate change” to “changing consumer preferences” and “global population growth.” These drivers help Keith’s team think more broadly about demand.

In all cases, the AI required careful oversight from humans and sometimes produced laughable results. Still, it can help ensure a broad view of challenges rather than the sometimes myopic viewpoints of those who are entrenched in a particular field. 

OpenAI o1 will be a subscription tool, like many other high-end models today. But New York Magazine reports that despite the plethora of whizz-bang new tools on the market, tech companies are still trying to determine how to earn back the billions they’re investing, save a standard subscription model that’s currently “a race to the bottom.” 

ChatGPT has a free version, as do Meta and Google’s AI models. While upsell versions are available, it’s hard to ask people to pay for something they’ve become accustomed to using for free – just ask the journalism industry. But AI investment is eye-wateringly expensive. Eventually, money will have to be made.

Nandan Nilekani, co-founder of Infosys, believes that these models will become “commoditized” and the value will shift from the model itself to the tech stack behind it.

This will be especially true for B2B AI, Nilekani said.

“Consumer AI you can get up a chatbot and start working,” he told CNBC. “Enterprise AI requires firms to reinvent themselves internally. So it’s a longer haul, but definitely it’s a huge thing happening right now.” 

Regulation and risk 

The onslaught of new LLMs, tools and business use cases makes mitigating risk a priority for communicators in both the government and private sector.

Omnipresent recording artist Taylor Swift made headlines last week after endorsing Vice President Kamala Harris for president, explaining that the Trump campaign’s use of her likeness in AI deepfakes informed her endorsement.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote on Instagram. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” 

This isn’t the first time that Swift has been subjected to the damage AI deepfakes can cause. Earlier this year, fake pornographic images of Swift were widely circulated on X.

Last week, the Biden-Harris administration announced a series of voluntary commitments from AI model developers to combat the creation of non-consensual intimate images of adults and sexually explicit material of children. 

According to the White House, these steps include:

    • Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse. 
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI commit to incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse.  
    • Adobe, Anthropic, Cohere, Microsoft, and OpenAI, when appropriate and depending on the purpose of the model, commit to removing nude images from AI training datasets.

While these actions sound great on paper, the lack of specifics and use of phrases like “responsibly sourcing” and “when appropriate” raise the question of who will ultimately make these determinations, and how a volunteer process can hold these companies accountable to change.

Swift’s words, meanwhile, underscore how much the rapid, unchecked acceleration of AI use cases exists as an existential issue for voters in affected industries. California Gov. Gavin Newsom understands this, which is why he signed two California bills aimed at giving performers and other artists more protection over how their digital likeness is used, even after their death.

According to Deadline:

A.B. 1836 expands the scope of the state’s postmortem right of publicity, including the use of digital replicas, meaning that an estate’s permission would be needed to use such technology to recreate the voice and likeness of a deceased person. There are exceptions for news, public affairs and sports broadcasts, as well as for other uses like satire, comment, criticism and parody, and for certain documentary, biographical or historical projects.

The other bill, A.B. 2602, bolsters protections for artists in contract agreements over the use of their digital likenesses. 

Newsom hasn’t yet moved on SB 1047, though, which includes rules requiring AI companies to share their plans to protect against manipulation of infrastructure. He has until Sept. 30 to sign or veto the bill, or allow it to become law without his signature. The union SAG-AFTRA, the National Organization for Women and Fund Her have all sent letters supporting the bill.

This whole dance is ultimately an audience-first exercise that will underscore just who Newsom’s audience is – is it his constituents, the big tech companies pumping billions into the state’s infrastructure, or a mix of both? The power of state governments to set a precedent that the federal government can model national regulation around cannot be overstated.

However Newsom responds, the pressure from California arrives at a time when Washington is proposing similar regulations. Last Monday, the U.S. Commerce Department said it was considering implementing detailed reporting requirements for advanced AI developers and cloud-computing providers to ensure their tech is safe and resilient against cyberattacks.

Reuters reports:

The proposal from the department’s Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of “frontier” AI models and computing clusters.

It would also require reporting on cybersecurity measures as well as outcomes from so-called red-teaming efforts like testing for dangerous capabilities including the ability to assist in cyberattacks or lowering barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons.

That may explain why several tech executives met with the White House last week to discuss how AI data centers impact the country’s energy and infrastructure. The who’s-who list included Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and Google President Ruth Porat along with leaders from Microsoft and several American utility companies.

Last month, Altman joined the Washington lobbying group Business Software Alliance, reported Semafor. The global group pushes a focus on “responsible AI” for enterprise business, a buzzword evangelized in owned media white papers across the world. 

Microsoft provides the most recent example of this, explaining its partnership with G42, an AI-focused holding group based in Abu Dhabi, as an example of how responsible AI can be implemented in the region.

Last week, Altman left OpenAI’s safety board, which was created this past May to oversee critical safety decisions around its products and operations. It’s part of the board’s larger commitment to independence, transparency and external collaboration. The board will be chaired by Carnegie Mellon professor Zico Kolter and include current OpenAI board members Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone and ex-Sony EVP Nicole Seligman.

Understood through the lens of a push for independence, Altman’s leaving the board soon after joining a lobbying group accentuates the major push and pull between effective internal accountability and federal oversight. Voluntary actions like signing commitments or publishing white papers are one way for companies to show ‘responsible AI use’ while still avoiding more stringent regulation.

Meanwhile, several pioneering AI scientists called for a coordinated global partnership to address risk, telling The New York Times that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.” This response would empower watchdogs at the local and national levels to work in lockstep with one another.

We’re already seeing what a regulatory response looks like amid reports that Ireland’s Data Protection Commission is investigating Google’s Pathways Language Model 2 to determine whether its data practices pose a threat to individuals represented in the training datasets. 

While a coordinated effort between the EU and the U.S. may seem far-fetched for now, this idea is a reminder that you have the power to influence regulation and policy at your organization, and to weigh in on the risks and rewards of strategic AI investments, before anything is decided at the federal level.

That doesn’t always mean influencing policies and guidelines, either. If a leader is going around like Oracle co-founder Larry Ellison and touting their vision for expansive AI as a surveillance tool, you can point to the inevitable blowback as a reason to vet their thought leadership takes first.

Positioning yourself as a guardian of reputation starts with mitigating risk. Starting conversations around statements like Ellison’s surveillance state take, or moves like Altman’s resignation from OpenAI’s safety board, builds a foundation for knowledge sharing that shapes sound best practices and empowers your company to move along the AI maturity curve responsibly. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

5 new AI roles for PR pros https://www.prdaily.com/5-new-ai-roles-for-pr-pros/ https://www.prdaily.com/5-new-ai-roles-for-pr-pros/#comments Mon, 16 Sep 2024 10:00:08 +0000 https://www.prdaily.com/?p=344383 AI is expanding options for PR pros. Jennifer Jones-Mitchell offers team trainings and AI consulting through Human Driven AI.   AI won’t take your job. Someone who knows how to use AI will. This is how I begin my AI trainings and consulting sessions. And it is true. If you aren’t upskilling in generative AI, […]

AI is expanding options for PR pros.

Jennifer Jones-Mitchell offers team trainings and AI consulting through Human Driven AI.  

AI won’t take your job. Someone who knows how to use AI will. This is how I begin my AI trainings and consulting sessions. And it is true. If you aren’t upskilling in generative AI, you will fall behind. But there is an even more compelling trend when it comes to AI and jobs: new employment opportunities for PR professionals. Yes, you read that right. For all the talk about AI taking jobs, it is also creating new ones. Here are the five new AI roles for PR pros:

  1. AI Trainer. This can cover two areas: training people to use AI ethically and effectively and/or training the actual AI models. I have a colleague who has twenty-five years of corporate communications experience. She no longer works in PR. She’s now employed by the world’s largest tech company, teaching their AI models to communicate like humans.

As companies and agencies continue to develop custom LLMs, or AI Agents, they need to teach these models to communicate with employees, customers and all stakeholders effectively. The more human-like AI behaves, the better the models perform overall.

Requirements: In addition to knowing how to structure training data for AI systems, this role requires an understanding of brand messaging, offerings, customer personas, and their needs, so the AI engages stakeholders with on-brand communication. In other words, it is the perfect position for an experienced PR professional. I expect we will see many folks pivoting to this kind of role.

  2. Prompt Engineer. As prompting becomes increasingly complex, this is a critical position for PR pros. In fact, the job itself is a natural progression for many of us. Most people write a single prompt and run with that output. But when you understand how to talk to AI models, you can create multi-step prompt sequences that accomplish multi-step tasks. You can even train AI models to remember these sequences so other team members can run them when needed (see the sketch after this list for one way such a sequence might look).

Requirements: While this role requires an understanding of how to talk to Generative AI, I predict a need for vertically aligned prompt engineers. The prompts you create for highly regulated industries like healthcare, banking and insurance differ significantly from those you’d create for a CPG brand, a B2B brand and so on. There is a need for experts who understand specific vertical markets and can craft compelling prompts that serve those markets.

In fact, PR agencies should plan to have prompt engineers on staff by 2025. This will be critical to keep pace with this evolving technology while ensuring agency teams utilize Generative AI ethically and effectively.

  3. Generative Design Specialists. This position is like a prompt engineer but focuses on visual design. I’ve seen this role gain popularity across fields like architecture, product design and engineering.

Requirements: The way in which you instruct image generators differs from the way you instruct text generators. Understanding these differences is critical for design-focused professionals. Similarly, there is an increasing need for a category-specific understanding of how to direct Generative AI to achieve common design tasks, including complex prompt sequences. Because Generative AI can be used to create limitless design variations, experts are needed to develop the precise prompts and refine outputs to achieve optimal designs based on the categories they serve.

  4. Input/Output Managers. This role is one level up from a prompt engineer and it’s more of a strategic position that oversees the information your teams are uploading into AI models and the quality of the outputs the models deliver.

As companies and agencies continue to grapple with considerations around data privacy, copyright, AI explainability and bias, this quality control position ensures employees adhere to policies and guidelines. This position should be prepared to audit all forms of content – including written articles, visual designs and analysis reports. Human reviewers are critical to assess the quality, accuracy and appropriateness of content. So, while we will undoubtedly see Generative AI used to create more content, we still need humans to ensure that content is fit for the intended purpose.

Requirements: The position requires someone with keen attention to detail and an ability to translate business requirements into technical specifications. The best candidates possess excellent written and verbal communication skills and have experience creating documentation and user guides, making this a terrific role for PR professionals. The role would conduct tasks like random spot-checks on what teams are inputting into AI – including the prompts they’re using – to ensure they aren’t sharing proprietary IP with AI systems. The position also reviews AI-generated outputs to ensure consistent and effective human review and revision. In other words, they make sure AI creates the first draft, not the final draft.

  5. AI Personality Designer. This fun role will be in demand in 2025. As more companies release new Generative AI models and brands create custom LLMs, it will be increasingly necessary for differentiation.

Professionals will teach AI models to possess distinct personalities that separate them from the competition. This could be an LLM with a sarcastic sense of humor, or one that offers motherly guidance, or an LLM that understands how to connect with the LGBTQ community. We’ve already seen similar examples of this. Last year, Latimer.ai hit the scene, an LLM trained exclusively on content from the HBCUs to help brands connect more authentically to people of color.

As the market floods with different AI models, differentiation through unique personalities will be key. And there is no one better to train these models than PR professionals who already know how to write, speak and create in the distinct voices and personalities of various brands.
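Back to the prompt engineer role for a moment: here is a minimal sketch of what a reusable multi-step prompt sequence could look like in Python. It assumes the OpenAI Python SDK with an API key in the environment; the model name, the `ask` helper and the three example steps are illustrative placeholders rather than any particular team’s workflow.

```python
# Minimal sketch of a reusable multi-step prompt sequence.
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name and the steps below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Each step builds on the output of the previous one, so teammates can rerun
# the whole sequence instead of improvising one-off prompts.
STEPS = [
    "Summarize the key announcement in this draft press release:\n{text}",
    "List three journalist-friendly story angles based on this summary:\n{text}",
    "Write a 100-word pitch email using the strongest of these angles:\n{text}",
]

def run_sequence(initial_text: str) -> str:
    text = initial_text
    for step in STEPS:
        text = ask(step.format(text=text))
    return text

if __name__ == "__main__":
    print(run_sequence("ACME Corp. launches a recycled-packaging line on Oct. 1."))
```

Whether a team scripts something like this or saves the same steps inside a custom GPT, the reusable asset is the sequence itself, not any single prompt.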

The bottom line is this: AI is changing not only the way we do our work, but the work we do. Generative AI delivers opportunities for PR professionals to automate common tasks, close skills gaps, drive personalization at scale and enhance audience engagement. But AI is just a tool. It still requires humans to drive it. And humans are creating new jobs to do just that.

The post 5 new AI roles for PR pros appeared first on PR Daily.

How AI helped Hotwire Global unleash ‘chaos’ and creativity in the best way https://www.prdaily.com/how-ai-helped-hotwire-global-unleash-chaos-and-creativity-in-the-best-way/ https://www.prdaily.com/how-ai-helped-hotwire-global-unleash-chaos-and-creativity-in-the-best-way/#respond Wed, 11 Sep 2024 08:04:59 +0000 https://www.prdaily.com/?p=344322 A company-wide challenge opened new pathways of discovery. Often, AI programs are carefully managed by a select group of workers who proceed carefully and cautiously with this promising yet dangerous technology. But sometimes, you need to inject a little chaos into the proceedings. Hotwire Global offered a splash of training and then unleashed more than […]

A company-wide challenge opened new pathways of discovery.


Often, AI programs are carefully managed by a select group of workers who proceed carefully and cautiously with this promising yet dangerous technology.

But sometimes, you need to inject a little chaos into the proceedings.

Hotwire Global offered a splash of training and then unleashed more than 400 people from across departments to create wild new custom GPTs for the business.

The results blew away Anol Bhattcharya, managing director, marketing service: APAC for Hotwire.

Here’s how the challenge came together – and what came out of this grand experiment.

Answers have been edited for brevity and clarity.

 

How are you interacting with AI?

I’m using them personally, professionally all the time. So multiple different things, starting from GPTs to Claude to building new RAG or retrieval augmented generations.

How did you come up with the idea for the AI challenge?

Every agency is forming a team or hiring new people to have this lab of innovation using AI and gen AI. And then also another part which everybody is trying to do is putting the guardrail of governance. We are also doing that. But I think this (contest) is something totally different, and I haven’t come across that from anywhere else.

That is, what if we give the (AI) tool to everybody in the organization, give a little bit of training, bare minimum training, and not give too much instruction, and just ask them to build something? And when I say people in the company, it’s not only just the techies, but the people who are comms specialists, even the finance person, just to see what they come up with.

And (we) conducted a few hands on trainings, like how to create some custom GPTs, and showed some real world examples of some of the GPTs we created for clients and things like that – and let it go.

We received so many awesome, just mind-blowing examples that we never even thought of, from hilarious to very useful. Some awesome internal process development tools, some of them are client facing, which we are developing further now. It’s not only the AI – it’s what any comms and marketing agency’s innovation should look like: give them the tools, teach them basics and get out of the way, rather than trying to mold it too much. So really cool things happened.

Why do you think the results were so good?

Imagine as a child, right? Your imaginations are limitless. Sometimes our education kind of stifles us, and that’s what happened.

Because they were not aware of all the restrictions and strategy and all those things, they just focused on something which they wanted to do.

Let me give you the silliest example. Jeremy’s team came up with “ideas that can get you fired.” That means you ask for a campaign idea. You give them this kind of a campaign, this kind of a company, give them a campaign idea and it will always give you a politically incorrect answer, like the weirdest idea. But this became our brainstorming tool, because it’s always good to start with pushing the envelope and coming back. Some of them are just funny, like one team came up with a relationship advisor, which only speaks in Taylor Swift song lyrics. But some really, really useful things came up.

What was the winner?

The winner was an RFI helper. We have a huge database of RFI already submitted. And as an account manager, you’re building something, you can just use that as a RAG and query, and it gives you the answer.  Everyone, account managers and directors, just loved it – ‘We want to roll this out right now!’ We had to stop them so they let us put the guardrails on so that it doesn’t hallucinate and all those things. But this idea came out of nowhere. We haven’t thought of that as an AI team, but it came from the ground level. That’s what excited me the most.

They submitted this AI custom GPT as a resume. Her name is Tricia, and she can do multiple things. She can find out RFI help. She can find out who is the expert where — like, I am doing a cyber security comms planning, who I can reach out to, who’s got experience, tapping into our capability metrics and finding the resource internally on that. We got blown away by that.
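For readers wondering what the retrieval half of an RFI helper like this might look like, here is a minimal sketch of the “R” in RAG: finding the past RFI answer most similar to a new question. It uses scikit-learn’s TF-IDF vectorizer over a few invented responses; Hotwire’s actual implementation isn’t public, and a production system would hand the retrieved text to an LLM – with the guardrails mentioned above – rather than stop here.

```python
# Minimal retrieval sketch: find the past RFI answer closest to a new question.
# Assumptions: `pip install scikit-learn`; the documents below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_rfi_answers = [
    "Our crisis communications team offers 24/7 coverage across EMEA and APAC.",
    "We measure earned media impact using share of voice and message pull-through.",
    "Our cybersecurity practice has supported breach response for fintech clients.",
]

query = "Describe your experience with cybersecurity communications."

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(past_rfi_answers)  # index the archive
query_vec = vectorizer.transform([query])                # vectorize the new question

scores = cosine_similarity(query_vec, doc_matrix)[0]
best_match = past_rfi_answers[scores.argmax()]
print(best_match)  # the snippet a RAG setup would pass to the LLM as context
```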

Out of the 40 ideas submitted, how many will you continue to develop?

I will say at least eight. Of course, we have to find time to fine tune this. That’s always the thing — people have their day job. The use case is definitely there to be used, either for internal process or for client facing ones.

You said in a LinkedIn post talking about this contest that “culture eats strategy for breakfast.” What did you mean by that?

Sometimes you just need the chaotic energy to breed innovation. So this is the example of just releasing the chaos in the world and let them do anything with the tool, without putting too much restriction on.

There were pictures getting posted, and all people wanted to be part of one team or another. Slack is buzzing. So two things happened. Not only these tools came out suddenly, like 400 people around the world are interested in using AI in their work. It’s a sudden burst of energy from everyone to use that culturally. We changed the company’s perception to just not talk about AI, just not read about AI, it’s just use the AI to do your things. So something amazing happened.

What advice would you give organizations wanting to do a contest like this?

I will say that there is a basic, minimal understanding required. So train the trainer. This spread like wildfire: Ignite the thing in one place. Train a few people. Then make them the evangelist. They will go and push that.

 

The post How AI helped Hotwire Global unleash ‘chaos’ and creativity in the best way appeared first on PR Daily.

How I Got Here: PRophet Founder Aaron Kwittken on unlocking AI for smarter stakeholder engagement https://www.prdaily.com/how-i-got-here-prophet-founder-aaron-kwittken-on-unlocking-ai-for-smarter-stakeholder-engagement/ https://www.prdaily.com/how-i-got-here-prophet-founder-aaron-kwittken-on-unlocking-ai-for-smarter-stakeholder-engagement/#respond Fri, 06 Sep 2024 10:00:27 +0000 https://www.prdaily.com/?p=344281 Aaron Kwittken shares about the toughest moment of his career. Aaron Kwittken, founder and CEO of PRophet, is a seasoned communications pro with over 30 years of experience, starting with public affairs on the hill and then transitioning to tech entrepreneurship. He founded PRophet, the first AI-powered platform for PR professionals, after a successful career […]

Aaron Kwittken shares about the toughest moment of his career.

Aaron Kwittken, founder and CEO of PRophet, is a seasoned communications pro with over 30 years of experience, starting with public affairs on the Hill and then transitioning to tech entrepreneurship. He founded PRophet, the first AI-powered platform for PR professionals, after a successful career leading a global PR and brand strategy firm, KWT. 

The thing I’m most excited about for the future of my profession is: 

Being able to bring the power of AI to revolutionize stakeholder engagement. With AI, we can better predict which stakeholders will be most receptive to a company or brand’s content – journalists, creators, influencers, prospects, employees, competitors, and the list goes on. We are finally in a position to be able to unlock data to backstop our instincts and replace the guessing game with knowledge when it comes to strategy, creativity and engagement to maximize impact.  

A tool or a piece of software I cannot live without is: 

I can’t live without Peak Metrics, an AI-powered narrative monitoring and analysis platform. 

And of course, The PRophet Suite to improve pitch performance and influencer engagement.  

Someone who has helped me be successful in my career is:  

There are a few individuals that come to mind for me. I am very grateful for Joe Gleason, Bob Feldman and David Gallagher’s support over the years. They have been my professional sounding board and have given me the confidence and inspiration to take on new ventures. Bob and David are what I consider to be my “professional Rabbis.”  

One piece of advice I would give other people in my profession is: 

I have several pieces of advice:  

  • It’s important to prioritize your physical and mental health.  
  • Stay curious to stay ahead. 
  •  Never stop innovating.  
  • Always remember that excellence and comfort can’t co-exist — comfort is a career killer.   

One way I maintain my work-life balance is: 

Spending time with my human and canine family and staying physically and mentally fit through open water swimming, long-distance running and cycling, hot yoga and daily meditation.  It’s crucial for me to prioritize these things to maintain the balance.  

The toughest moment in my career was: 

During the height of COVID, I managed my agency, KWT, launched PRophet, and served as president of my temple—all with a clear goal: to keep as many people employed as possible, safeguard everyone’s mental and physical health, and position each entity for long-term success amid the storm of panic and uncertainty. 

The most challenging moment during that time was the murder of George Floyd. While I was proud of our agency’s commitment to DEIB, it became clear that we needed to evolve further in both our culture and hiring practices. I was also shocked by the weak and uninformed responses from some clients when advising them on communication strategies during that period. 

 Fast forward to today, the Israel-Hamas war has sparked a pervasive, yet often silent, anxiety, especially with the rise of antisemitism and hate toward the Jewish community. I have been outspoken and believe it’s crucial for the non-Jewish community to show stronger allyship and stand with us. 

Isis Simpson-Mersha is a conference producer/ reporter for Ragan. Follow her on LinkedIn.

 

The post How I Got Here: PRophet Founder Aaron Kwittken on unlocking AI for smarter stakeholder engagement appeared first on PR Daily.

AI for communicators: What’s new and what matters https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-10/ https://www.prdaily.com/ai-for-communicators-whats-new-and-what-matters-10/#respond Wed, 04 Sep 2024 09:30:35 +0000 https://www.prdaily.com/?p=344251 A beloved social media tool skyrockets in price due to AI; California passes groundbreaking regulation bill. The recent Labor Day holiday has many of us thinking about how AI will impact the future of work. There are arguments to be made about whether the rise of the tech will help or hurt jobs – it’s […]

A beloved social media tool skyrockets in price due to AI; California passes groundbreaking regulation bill.


The recent Labor Day holiday has many of us thinking about how AI will impact the future of work. There are arguments to be made about whether the rise of the tech will help or hurt jobs – it’s a sought-after skill for new hires, but one company is using AI as a pretext for cutting thousands of roles. And in the short-term, the rapid expansion of technology is making at least some tools used by workers more expensive.

Here’s what communicators need to know about AI this week.

Tools

Many tech companies continue to go all-in on AI – and are charging for the shiny new features.

Canva, a beloved tool of social media managers, has ratcheted prices up by as much as 300% in some cases, The Verge reported. Some Canva Teams subscribers report prices leaping from $120 per year for a five-person team to $500. Some of those lower prices were legacy, grandfathered rates, but nonetheless, it’s an eye-watering increase that Canva attributes in part to new AI-driven design tools. But will users find that worth such a massive price increase? 

Canva’s price hikes could be a response to the need for companies to recoup some of their huge investments in AI. As CNN put it after Nvidia’s strong earnings report nonetheless earned shrugs: “As the thrill of the initial AI buzz starts to fade, Wall Street is (finally) getting a little more clear-eyed about the actual value of the technology and, more importantly, how it’s going to actually generate revenue for the companies promoting it.” 

While Canva seems to be answering that question through consumer-borne price hikes, OpenAI is trying to keep investment from companies flowing in. It’s a major pivot for a company founded as a nonprofit that now requires an estimated $7 billion per year to operate, compared to just $2 billion in revenue. Some worry that the pursuit of profits and investment is coming at the expense of user and data safety. 

Meanwhile, Google is launching or relaunching a number of new tools designed to establish its role as a major player in the AI space. Users can once again ask the Gemini model to create images of people – an ability that had been shut down for months after the image generator returned bizarre, ahistorical results and appeared to have difficulties creating images of white people when asked. While it’s great to have another tool available, Google’s AI woes have been mounting as multiple models have proven to be not ready for primetime upon launch. Will new troubles crop up? 

Google is also expanding the availability of its Gmail chatbot, which can help surface items in your inbox, from web only to its Android app – though the tool is only available to premium subscribers.

While using AI to search your inbox is a fairly understandable application, some new frontiers of AI are raising eyebrows. “Emotion AI” is when bots learn to read human emotion, according to TechCrunch. This goes beyond the sentiment analysis that’s been a popular tool on social media and media monitoring for years, reading not just text but also human expressions, tone of voice and more. 

While this has broad applications for customer service, media monitoring and more, it also raises deep questions about privacy and how well anyone, including robots, can actually read human emotion. 
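For context on the baseline that “emotion AI” claims to move beyond, here is a minimal sketch of plain text sentiment scoring using NLTK’s VADER lexicon – the kind of analysis social listening tools have offered for years. The example posts are invented, and commercial monitoring platforms use far more sophisticated models.

```python
# Text-only sentiment scoring - the longstanding baseline that "emotion AI"
# claims to go beyond. Assumptions: `pip install nltk` plus the one-time
# lexicon download below; the posts are invented examples.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

posts = [
    "Loved the keynote - the new product looks genuinely useful!",
    "Support kept me on hold for an hour. Extremely frustrating.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)  # returns neg/neu/pos and a compound score
    print(f"{scores['compound']:+.2f}  {post}")
```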

Another double-edged sword of AI use is evidenced by the use of AI news anchors in Venezuela, Reuters reports. 

As the nation launches a crackdown on journalists after a highly disputed election, a Colombian nonprofit uses AI avatars to share the news without endangering real people. The project’s leader says it’s to “circumvent the persecution and increasing repression” against journalists. And while that usage is certainly noble, it isn’t hard to imagine a repressive regime doing the exact opposite, using AI puppets to spread misinformation without revealing their identity or the source of their journalism to the world.

Risks 

Many journalism organizations aren’t keen for their work to be used by AI models – at least not without proper pay. Several leading news sites have allowed their websites to be crawled for years, usually to help with search engine rankings. 

Now those same robots are being used to feed LLMs, and news sources, especially paywalled sites, are locking the door by restricting where on their sites these bots can crawl.

Apple specifically created an opt-out method that allows sites to continue to be crawled for existing purposes – think search – without allowing the content to be used in AI training. And major news sites are opting out in droves, holding out for specific agreements that will allow them to be paid for their work.
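Mechanically, much of this opting out happens in a site’s robots.txt file: publishers keep welcoming ordinary search crawlers while disallowing the user agents that collect AI training data. Below is a minimal sketch that writes such a file. Applebot-Extended and GPTBot are the publicly documented opt-out tokens for Apple and OpenAI respectively; the rest of the policy is an illustrative assumption, not any specific publisher’s configuration.

```python
# Sketch: generate a robots.txt that keeps search crawling open but opts out of
# AI-training crawlers. Applebot-Extended (Apple) and GPTBot (OpenAI) are the
# publicly documented tokens; the overall policy here is illustrative only.
AI_TRAINING_CRAWLERS = ["Applebot-Extended", "GPTBot"]

rules = ["User-agent: *", "Allow: /", ""]  # ordinary crawlers (search) stay welcome
for bot in AI_TRAINING_CRAWLERS:
    rules += [f"User-agent: {bot}", "Disallow: /", ""]  # AI-training bots are blocked

robots_txt = "\n".join(rules)
with open("robots.txt", "w") as f:
    f.write(robots_txt)

print(robots_txt)
```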

This creates a larger issue. AI models are insatiable, demanding a constant influx of content to continue to learn, grow and meet user needs. But as legitimate sources of human-created content are shut off and AI-created content spreads, AI models are increasingly trained on more AI content, creating an odd content ouroboros. If it trains too much on AI content that features hallucinations, we can see a model that becomes detached from reality and experiences “model collapse.”

That’s bad. But it seems in some ways inevitable as more and more AI content takes over the internet and legitimate publishers (understandably) want to be paid for their work.

But even outside of model collapse, users must be vigilant about trusting today’s models. A recent case of weird AI behavior went viral this week when it was found that ChatGPT was unable to count how many times the letter “R” appears in “strawberry.” It’s three, for the record, yet ChatGPT insisted there were only two. Anecdotally, this reporter has had problems getting ChatGPT to accurately count words, even when confronted with a precise word count. 

It’s a reminder that while technology can seem intelligent and confident, it’s often confidently wrong. 
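One practical habit that follows: counting letters or words is a deterministic job, so it is worth checking this kind of claim with a couple of lines of code rather than taking the chatbot’s word for it. A trivial sketch:

```python
# Deterministic checks for the kinds of counting tasks chatbots get wrong.
word = "strawberry"
print(word.count("r"))     # 3 - the answer ChatGPT fumbled

draft = "AI can seem intelligent and confident, yet be confidently wrong."
print(len(draft.split()))  # a plain word count to verify against any AI claim
```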

Kevin Roose, tech columnist for the New York Times, also discovered this week just how difficult it is to change AI’s mind about something. In this case, the subject was himself: Roose rocketed to fame last year when Microsoft’s AI bot fell in love with him and tried to convince him to leave his wife. 

As a result, many AI models don’t seem too keen on Roose, with one even declaring, “I hate Kevin Roose.”

But changing that viewpoint was difficult. Roose’s options were getting websites to publish friendly stories showing that he wasn’t antagonistic toward AI (in other words, public relations) or creating his own website with friendly transcripts between him and chatbots, which AI models would eventually crawl and learn. A quicker and dirtier approach involved leaving “secret messages” for AI in white text on his website, as well as specific sequences designed to return more positive responses.

On the one hand, manipulating AI bots is likely to become the domain of PR professionals in the near future, which could be a boon for the profession. On the other hand, this shows just how easily manipulated AI bots can be – for good and for evil.

And even when used with positive intent, AI can still return problematic results. A study featured in Nature found that AI models exhibited strong dialect prejudice that penalizes people for their use of African American Vernacular English, a dialect frequently used by Black people in the United States. “Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death,” the study finds.

This is what happens when technology is trained on so much human writing: it’s going to pick up the flaws and prejudices of humans as well. Without strong oversight, it’s likely to cause major problems for marginalized people. 

Finally, there is debate over what role AI is playing in the U.S. presidential election. Former President Donald Trump himself appeared to be taken in by a deepfake in which Taylor Swift endorsed him (no such thing ever happened), sharing it on his Truth Social platform. AI is being used by both camps’ supporters, sometimes to generate obviously fake imagery, such as Trump as a bodybuilder, while some are more subtle. 

But despite its undeniable presence in the election, it isn’t clear that AI is actually reshaping much in the race. State actors, such as Russia, are using the tools to try to manipulate the public, yes, but a report from Meta indicated that the gains were incremental and this year’s election isn’t significantly different from any other in regards to disinformation. 

But that’s only true for now. Vigilance is always required. 

Regulation

While some continue to question the influence of deepfakes on our democratic process, California took major steps last week to protect workers from being exploited by deepfakes.

California Assembly Bill 2602 was passed in the California Senate and Assembly last week to regulate the use of Gen AI for performers, including those on-screen and those who lend their voices or bodily likeness to audiobooks and videogames. 

While the bipartisan support the bill enjoyed is rare, rarer still is the lack of opposition from industry groups, including the Motion Picture Association, which represents Netflix, Paramount Studios, Sony, Warner Bros. and Disney, according to NPR. 

The bill also includes rules that require AI companies to share their plans to protect against manipulation of infrastructure. 

NPR reports:

The legislation was also supported by the union SAG-AFTRA, whose chief negotiator, Duncan Crabtree-Ireland, points out that the bill had bipartisan support and was not opposed by industry groups such as the Motion Picture Association, which represents studios such as Netflix, Paramount Pictures, Sony, Warner Bros., and Disney. A representative for the MPA says the organization is neutral on the bill.

Bill S.B. 1047 also advanced. That bill would require AI companies to share safety proposals to protect infrastructure against manipulation, according to NPR.

The AP reports:

“It’s time that Big Tech plays by some kind of a rule, not a lot, but something,” Republican Assemblymember Devon Mathis said in support of the bill Wednesday. “The last thing we need is for a power grid to go out, for water systems to go out.”

The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.

California Democratic Gov. Gavin Newsom has until Sept. 30 to sign, veto or allow these proposals to become law without his signature. This puts all eyes on Newsom to either ratify or kill the potential laws that multiple stakeholders have different perspectives on. 

Given the opposition from major California employers like Google, there is a chance Newsom vetoes S.B. 1047, Vox reported. 

And while tech giants oppose California’s Bill S.B. 1047, we have a hint at what they’d like to see happen at the federal level instead.

Last Thursday, the U.S. AI Safety Institute announced it had come to a testing and evaluation agreement with OpenAI and Anthropic, according to CNBC, that allows the institute to “receive access to major new models from each company prior to and following their initial public release.” 

Established after the Biden-Harris administration’s executive order on AI was issued last fall, the Institute exists as part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST).

According to the NIST:

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

If this public-private partnership agreement seems vague on details and methodology, that’s because it is. The lack of detail underscores a major criticism that Biden’s executive order was light on specifics and mechanisms for enforcement. 

The outsized push from big tech to settle regulation at the federal level makes sense when one considers the investments most major companies have made in lobbyists and public affairs specialists.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” reports Public Citizen.  

For communicators, this push and pull is a reminder that regulation and responsible use must start internally – and that, whatever happens in California by the end of the month, waiting for tangible direction from either federal or state governments may be a path to stalled progress.

Without some required reporting and oversight, regulators will continue to struggle with the pace of AI developments. But what would responsible safety measures look like in practice?

A recent report from the Financial Times looks at the EU’s AI Act, which was ratified this past spring, to answer this question. The report notes that the AI Act ties systemic risk to computing power metrics, and says this won’t cut it.

According to FT:

The trouble is that this relates to the power used for training. That could rise, or even fall, once it is deployed. It is also a somewhat spurious number: there are many other determinants, including data quality and chain of thought reasoning, which can boost performance without requiring extra training compute power. It will also date quickly: today’s big number could be mainstream next year. 

When the efficacy and accuracy of a risk management strategy depend largely on how you measure potential risks, agreeing on standardized parameters for responsible reporting and sharing of data remains an opportunity.

While many consider the EU’s AI Act a model that the rest of the world will follow (similar to Global Data Protection Regulation or GDPR), the recent push in California suggests that the state’s outsized investments in AI are propelling it to lead by example even faster. 

AI at work

While thinking about how to deploy AI responsibly often comes back to secure internal use cases, a recent report from Slingshot found that nearly two-thirds of employees primarily use AI to double-check their work. That’s higher than the number of workers using AI for initial research, workflow management and data analysis.

“While employers have specific intentions for AI in the workplace, it’s clear that they’re not aligned with employees’ current use of AI. Much of this comes down to employees’ education and training around AI tools,” Slingshot Founder Dean Guida said in a press release. 

This may account for a slight dip in US-based jobs that require AI skills, as measured by Stanford University’s annual AI Index Report. 

The report also looked at which AI skills were most sought after, which industries will rely on them the most and which states are leading in AI-based jobs.

The Oregon Capital Chronicle sifted through the report and found:

Generative AI skills, or the ability to build algorithms that produce text, images or other data when prompted, were sought after most, with nearly 60% of AI-related jobs requiring those skills. Large language modeling, or building technology that can generate and translate text, was second in demand, with 18% of AI jobs citing the need for those skills.

The industries that require these skills run the gamut — the information industry ranked first with 4.63% of jobs while professional, scientific and technical services came in second with 3.33%. The financial and insurance industries followed with 2.94%, and manufacturing came in fourth with 2.48%.

California — home to Silicon Valley — had 15.3%, or 70,630 of the country’s AI-related jobs posted in 2023. It was followed by Texas at 7.9%, or 36,413 jobs. Virginia was third, with 5.3%, or 24,417 of AI jobs.

This outsized presence of generative AI skills emphasizes that many jobs that don’t require a technical knowledge of language modeling or building will still involve the tech in some fashion.

The BBC reports that Klarna plans to get rid of almost half of its employees by implementing AI in marketing and customer service. It reduced its workforce from 5,000 to 3,800 over the past year, and wants to slash that number to 2,000.

While CIO’s reporting frames this plan as Klarna “helping reduce payroll in a big way,” it also warns against the risk associated with such rapid cuts and acceleration:

Responding to the company’s AI plans, Terra Higginson, principal research director at Info-Tech Research Group, said Wednesday, “AI is here to enhance employee success, not render them obsolete. A key trend for 2025 will be AI serving as an assistant rather than a replacement. It can remove the drudgery of mundane, monotonous, and stressful tasks.”

“(Organizations) that are thinking of making such drastic cuts should look into the well-proven productivity paradox and tread carefully,” she said. “There is a lot of backlash against companies that are making cuts like this.”

Higginson’s words are a reminder that the reputational risk of layoffs surrounding AI is real. As AI sputters through the maturity curve at work, it also reaches an inflection point. How organizations do or don’t communicate their use cases and connections to the talent pipeline will inevitably shape their employer brand.

This is also a timely reminder that, whether or not your comms role sits in HR, now is the time to study up on how your state regulates the use of AI in employment practices. 

Beginning in January 2026, an amendment to the Illinois Human Rights Act will introduce strict guidelines prohibiting AI-based decisions on hiring or promotion. Such behavior is framed as an act of discrimination.

This builds on the trend of the Colorado AI Act, which more broadly focused on the public sector when it was signed into law this past May, and specifically prohibits algorithmic discrimination for any “consequential decision.”

While you work with HR and IT partners to navigate bias in AI, remember that training employees on how to use these tools isn’t just a neat feature of your employer brand, but a vital step to ensure your talent is trained to keep your business competitive in the market.

BI reports:

Ravin Jesuthasan, a coauthor of “The Skills-Powered Organization” and the global leader for transformation services at the consulting firm Mercer, told BI that chief human-resources officers and other leaders would need to think of training — particularly around AI — as something that’s just as important as, for example, building a factory.

“Everyone needs to be really facile with AI,” he said. “It’s a nonnegotiable because every piece of work is going to be affected.”

He said experimenting with AI was a good start but not a viable long-term strategy. More organizations are becoming deliberate in how they invest, he added. That might look like identifying well-defined areas where they will deploy AI so that everyone involved uses the technology.

Jesuthasan’s words offer the latest reminder that comms is in a key position to coordinate experimentation efforts and investments in tech with an allocated investment in training that includes not only a platform for instruction and education, but time itself: dedicated time for incoming talent to train on the tools and use cases during onboarding, and dedicated time for high performers to upskill.

Treating this as an investment with equal weight will ultimately enhance your employer brand, protect your reputation and future-proof your organization all at once.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

The post AI for communicators: What’s new and what matters appeared first on PR Daily.

How to stop worrying and smartly embrace AI https://www.prdaily.com/ai-presents-risks-opportunities-for-pr-pros/ https://www.prdaily.com/ai-presents-risks-opportunities-for-pr-pros/#respond Wed, 28 Aug 2024 10:00:43 +0000 https://www.prdaily.com/?p=344194 Education key to breaking down AI-related fears. Knowing how AI models work is crucial to unlocking their potential and improving workflows. Yet, many business leaders are approaching generative and discriminating AI them with great caution. Kara Fisher, head of reputation insights for Signal AI, said she believes that’s due in part to a lack of […]

Education key to breaking down AI-related fears.

Knowing how AI models work is crucial to unlocking their potential and improving workflows. Yet many business leaders are approaching generative and discriminative AI with great caution.

Kara Fisher, head of reputation insights for Signal AI, said she believes that’s due in part to a lack of education about the subject. She cited a recent survey of 3,400 C-suite executives in which 76% saw generative AI, such as ChatGPT, as more of an opportunity than a threat. However, the kicker is that nearly the same number, 72%, indicated that they’re investing with more apprehension. Fears about using AI range from legal risks to worries about output accuracy.

“This is indicative of the real questions and fears that many of us have about how these technologies will impact our day-to-day lives,” Fisher told a packed crowd during PR Daily’s recent Media Relations Conference.

During her 30-minute talk, Fisher explored ways to ease some worries when developing frameworks for using those tools. A central part of that process is education about the specific mechanics of the different AI models and how they’ll affect specific business processes.

At this moment in time, users can’t always trust the output of generative AI because it’s producing predictions based on the inputs it’s learning from rather than true facts, Fisher said.

Fisher said generative AI tools like ChatGPT are great for content ideation – drafting press releases and talking points, creating graphics and so on. But there are also uses for discriminative AI, an older form of machine learning that can be beneficial for strategy development, optimizing workflows and measurement. Fisher gave the example of using tools such as Google’s Duplex to help a CEO decide what to talk about at an event by analyzing thousands of media and social media posts from the previous year’s conference to understand discussions and perceptions of different topics.
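As a rough illustration of the analytical use case Fisher describes – surfacing what audiences discussed most around last year’s event – here is a toy sketch that counts topic keywords across a handful of invented posts. It stands in for the general idea only; Signal AI’s and Google’s actual tooling is far more sophisticated.

```python
# Toy illustration of discriminative analysis: which topics dominated last
# year's conference chatter? The posts and keyword list are invented examples.
from collections import Counter
import re

posts = [
    "Great panel on measurement and proving PR value to the C-suite.",
    "The AI ethics session was packed - measurement came up there too.",
    "Crisis simulation workshop was the highlight for our team.",
]

topics = ["measurement", "ai", "ethics", "crisis", "media"]

counts = Counter()
for post in posts:
    words = set(re.findall(r"[a-z]+", post.lower()))  # unique words in this post
    counts.update(topic for topic in topics if topic in words)

for topic, n in counts.most_common():
    print(f"{topic}: mentioned in {n} of {len(posts)} posts")
```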

While these tools offer great potential, none of them are able to effectively perform the tasks of a PR or communications professional on their own. Fisher referred to AI as a “creative sidekick” or “creative sparring partner.”

When developing a workflow, Fisher emphasized the importance of factoring in “human-in-the-loop” approaches to ensure full leveraging of the platforms.

Fisher urged the audience to experiment with these current technologies and products but stressed the importance of continuing education. A discussion about the topic will probably look “dramatically different than a talk that any of us might give next year or even next month,” she said. So, she advised turning to trusted partners and subject matter experts, when possible, to make sure workflow frameworks are proactive, intentional and designed with care.

You can watch the full video below.

Casey Weldon is a reporter for PR Daily. Follow him on LinkedIn.

The post How to stop worrying and smartly embrace AI appeared first on PR Daily.

How AI helped PwC’s Megan DiSciullo make her communication process more efficient https://www.prdaily.com/how-ai-helped-pwcs-megan-disciullo-make-her-communication-process-more-efficient/ https://www.prdaily.com/how-ai-helped-pwcs-megan-disciullo-make-her-communication-process-more-efficient/#respond Wed, 28 Aug 2024 10:00:22 +0000 https://www.prdaily.com/?p=344196 DiSciullo shares her thoughts on making the AI boom work for the comms function. The rise of generative AI has spurred major discussions and reconsiderations about its capabilities and role in comms workflows. It can help comms pros draft copy, take on rote entry tasks, ideate on content, and much more. In our latest edition […]

DiSciullo shares her thoughts on making the AI boom work for the comms function.

The rise of generative AI has spurred major discussions and reconsiderations about its capabilities and role in comms workflows. It can help comms pros draft copy, take on rote entry tasks, ideate on content, and much more.

In our latest edition of our new “How AI Helped Me” series, we spoke with Megan DiSciullo, senior managing director of US and Mexico communications at PwC, about how she came to use AI, its impacts on her work and more.

Sean Devlin: Could you tell us a little about how you first started interacting with AI in your role at PwC, and how it’s evolved?

Megan DiSciullo: I had exposure to AI here and there, but my journey with AI took off when PwC US made its $1 billion investment in AI last April. My team and I initially explored GenAI for routine tasks like drafting and proofreading, quickly realizing the potential AI tools had to revolutionize our communications strategy and day-to-day operations.

Over time, GenAI has enabled us to deliver personalized and impactful communications at scale, from planning and content generation to rigorous self-review. It’s helping us connect with our audiences in more meaningful ways.

When you first started using AI, how did you educate yourself on how to use it?

MDS: I think the best way to learn AI is by using it, so I started by experimenting with our different AI tools and testing different prompts with my real work, to see how AI could fit into my existing workflows.

Beyond hands-on learning, I am constantly learning from other leaders, specialists in the industry, and members of my team about what works well, what doesn’t, and new use cases they’ve uncovered.

One key aspect of our approach to AI at PwC has been upskilling our entire team, making sure that everyone understands not just how to use AI, but how to do so responsibly. This ongoing learning journey has been critical in helping all of us increase the benefits of AI, while also decreasing potential risks.

How exactly does AI factor into your role at PwC?

MDS: AI plays a central role in my work at PwC, especially as we continue to innovate our communications strategies.

We’ve co-developed a suite of GenAI applications in-house that are tailored to the specific needs of our communicators and have integrated them seamlessly within our existing tech stack. These tools enable us to manage the entire communications lifecycle, from strategy formulation with our GenAI project planner to content creation to a reviewer tool.

By integrating AI into the different steps of the process, we can produce personalized, quality communications more efficiently, allowing us to focus more time on strategic needs.

Have you seen any changes to your workflow or customer/stakeholder satisfaction since you’ve begun using AI and automation?

MDS: Definitely! I’ve seen improvements in efficiency, creativity, and quality. It’s also helped to streamline many of our processes, like reducing the time needed to create, review, and scale content.

For example, our content generator tool creates initial drafts, giving us a place to work from, which allows us to produce content faster and with greater consistency.

What’s something about AI that you think communicators need to be talking about but aren’t discussing enough?

MDS: As communicators, we often work with sensitive and non-public information. It’s therefore critical that we use secure AI tools instead of open-source tools to protect company data and client confidentiality. At PwC, we’ve created many secure AI tools that allow us to produce communications for our stakeholders, while also safeguarding confidential data and information.

If you are on a communications team that isn’t leveraging secure AI tools yet, I encourage you to advocate for them to mitigate reputational and business risk. All of this is tied to responsible AI use too, which is so important!

Do you have a big prediction for AI usage in the next few years?

MDS: I believe that AI will likely become an even more integral part of the communications function, with the potential to evolve from a support tool to a co-creator. As AI continues to advance, I could see it taking on a more proactive role in strategy development, suggesting communication approaches based on predictive analytics and real-time data.

I also think the demand for transparency in AI will continue to grow, leading to the development of systems that not only execute tasks but also provide insights into their decision-making processes.

This shift will help empower communicators to leverage AI more effectively while maintaining the trust and integrity that are foundational to our profession.

To learn more about the practical uses of AI in comms, register for our AI for Communicators Virtual Conference, which takes place on September 19.

The post How AI helped PwC’s Megan DiSciullo make her communication process more efficient appeared first on PR Daily.

AI news for communicators: What’s new and notable https://www.prdaily.com/ai-news-for-communicators-whats-new-and-notable-2/ https://www.prdaily.com/ai-news-for-communicators-whats-new-and-notable-2/#respond Wed, 21 Aug 2024 10:15:45 +0000 https://www.prdaily.com/?p=344121 What you need to know about the latest research and developments on AI risk and regulation. Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately a matter of power, saying that “ nothing will give you more power than military and AI.” British Historian Lord Acton would have offered […]

What you need to know about the latest research and developments on AI risk and regulation.

Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately a matter of power, saying that “nothing will give you more power than military and AI.”

British historian Lord Acton would have offered a fitting response with his famous maxim, “Absolute power corrupts absolutely.” And as communicators continue to see the battle between private company lobbying efforts, state regulation and federal regulation play out in real time, it’s hard to argue with Cuban’s sentiment. 

In notable news for communicators, a controversial California AI regulation bill moves toward a vote at the end of the month, and the Democratic National Convention takes over Chicago amid an influx of deepfakes attempting to sway voter sentiment ahead of the 2024 presidential election.

Here’s what communicators need to know about AI this week.

Risks 

With the DNC hitting Chicago this week, coverage is fixated on the surrogates, speeches and memorable moments leading up to Vice President Kamala Harris’ formal acceptance of the presidential nomination Thursday. 

While the November elections will bring about many historic firsts, the widespread applications of deepfake technology to misrepresent candidates and positions is also unprecedented. 

On Monday, Microsoft hosted a luncheon at Chicago’s Drake Hotel to train people on detecting deceptive AI content and using tools that can help spot deepfakes as AI-manipulated media becomes more widespread.

The Chicago Sun-Times reports:

“This is a global challenge and opportunity,” says Ginny Badanes, general manager of Microsoft’s Democracy Forward Program. “While we’re, of course, thinking a lot about the U.S. election because it’s right in front of us, and it’s obviously hugely consequential, it’s important to look back at the big elections that have happened.”

Badanes says one of the most troubling political deepfake attacks worldwide happened in October in Slovakia just two days before the election for a seat in parliament in the central European country. AI technology was used to create a fake recording of a top political candidate bragging about rigging the election. It went viral. And the candidate lost by a slim margin.

In a report this month, Microsoft warned that figures in Russia were “targeting the U.S. election with distinctive video forgeries.”

These myriad examples highlight a troubling pattern of bad actors attempting to drive voter behavior. This plays out as an AI-assisted evolution of the microtargeting campaign that weaponized the psychographic profiles of Facebook users to flood their feeds with disinformation ahead of the 2016 election.

Once again, the bad actors are both foreign and domestic. Trump falsely implied that Taylor Swift endorsed him this week by posting fake images of Swift and her fans in pro-Trump garb. Last week, Elon Musk released image generation capabilities on Grok, his AI chatbot on X, which allows users to generate AI images with few filters or guidelines. As Rolling Stone reports, it didn’t go well. 

This may get worse before it gets better, which could explain why The Verge reports that the San Francisco City Attorney’s office is suing 16 of the most popular “AI undressing” websites that do exactly what it sounds like they do.

It may also explain why the world of finance is starting to recognize how risky an investment AI is in its currently unregulated state.

Marketplace reports that the Eurekahedge AI Hedge fund has lagged the S&P 500, “proving that the machines aren’t learning from their investing mistakes.”

Meanwhile, a new report from LLM evaluation platform Arize found that one in five Fortune 500 companies now mention generative AI or LLMs in their annual reports. Among them, researchers found a 473.5% increase in the number of companies that framed AI as a risk factor since 2022.

What could a benchmark for AI risk evaluation look like? Bo Li, an associate professor at the University of Chicago, has led a group of colleagues across several universities to develop a taxonomy of AI risks and a benchmark for evaluating which LLMs break the rules most.

Li and the team analyzed government AI regulations and guidelines in the U.S., China and the EU alongside the usage policies of 16 major AI companies. 

WIRED reports:

Understanding the risk landscape, as well as the pros and cons of specific models, may become increasingly important for companies looking to deploy AI in certain markets or for certain use cases. A company looking to use a LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than how capable it is of designing a nuclear device.

Bo says the analysis also reveals some interesting issues with how AI is being developed and regulated. For instance, the researchers found government rules to be less comprehensive than companies’ policies overall, suggesting that there is room for regulations to be tightened.

The analysis also suggests that some companies could do more to ensure their models are safe. “If you test some models against a company’s own policies, they are not necessarily compliant,” Bo says. “This means there is a lot of room for them to improve.”

This conclusion underscores the impact that corporate communicators can have on shaping internal AI policies and defining responsible use cases. You are the glue that can hold your organization’s AI efforts together as they scale.

Much like a crisis plan has stakeholders across business functions, your internal AI strategy should start with a task force that engages leaders across departments and functions, ensuring everyone communicates guidelines, procedures and use cases from the same playbook while serving as your eyes and ears to identify emerging risks.

Regulation

Last Thursday, the California State Assembly’s Appropriations Committee voted to endorse an amended version of a bill that would require companies to test the safety of their AI tech before releasing anything to the public. Bill S.B. 1047 would let the state’s attorney general sue companies if their AI caused harm, including deaths or mass property damage. A formal vote is expected by the end of the month.

Unsurprisingly, the tech industry is fiercely debating the details of the bill.

The New York Times reports:

Senator Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by another prominent start-up, Anthropic.

The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred.

“The new amendments reflect months of constructive dialogue with industry, start-up and academic stakeholders,” said Dan Hendrycks, a founder of the nonprofit Center for A.I. Safety in San Francisco, which helped write the bill.

A Google spokesperson said the company’s previous concerns “still stand.” Anthropic said it was still reviewing the changes. OpenAI and Meta declined to comment on the amended bill.

Mr. Wiener said in a statement on Thursday that “we can advance both innovation and safety; the two are not mutually exclusive.” He said he believed the amendments addressed many of the tech industry’s concerns.

Late last week, California Congresswoman Nancy Pelosi issued a statement sharing her concerns about the bill. Pelosi cited Biden’s AI efforts and warned against stifling innovation. 

“The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed,” Pelosi said.  

Pelosi cited the work of top AI researchers and thought leaders decrying the bill, but offered little in the way of next steps for advancing federal regulation.

In response, California state Sen. Scott Wiener, the bill’s sponsor, disagreed with Pelosi.

“The bill requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: Perform basic safety testing on massively powerful AI models,” Wiener said.

This disconnect highlights the frustrating push and pull between those who warn against an accelerationist mentality toward AI and those who publicly cite the stifling of innovation – a key talking point of those doing AI policy and lobbying work on behalf of big tech.

It also speaks to the limits of thought leadership. Consider the op-ed published last month by Amazon SVP of Global Public Policy and General Counsel David Zapolsky, which calls for alignment on a global responsible AI policy. The piece emphasizes Amazon’s willingness to collaborate with the government on “voluntary commitments,” highlights the company’s research and deployment of responsible use safeguards in its products and convincingly positions Amazon as a steward of responsible AI reform.

While the piece does a fantastic job of positioning Amazon as an industry leader, it never mentions federal regulation. The idea that public-private collaboration is a sufficient substitute for formal regulation surfaces indirectly through repeated mentions of collaboration, setting a precedent for the recent influx of AI lobbyists on Capitol Hill.

“The number of lobbyists hired to lobby the White House on AI-related issues grew from 323 in the first quarter to 931 by the fourth quarter,” Public Citizen reminds us.

As more companies stand up their philosophies on responsible AI use at the expense of government oversight, it’s crucial to understand what daylight exists between your company’s external claims about the efficacy of its responsible AI efforts and how those efforts are playing out on the inside.

If you’re at an organization large enough to have public affairs or public policy colleagues in the fold, this is a reminder that aligning your public affairs and corp comms efforts with your internal efforts is a crucial step to mitigating risk. 

Those who are truly able to regulate their deployment and use cases internally will be able to explain how they do it and share guidelines for ethical use cases, continued learning and much more. True thought leadership will take the form not of product promotion, but of showing the work through actions and results.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications.  Follow him on LinkedIn.

How AI helps Lake County, Florida’s Levar Cooper research, plan and break down barriers https://www.prdaily.com/how-ai-helps-lake-county-floridas-levar-cooper-research-plan-and-break-down-barriers/ Wed, 14 Aug 2024 09:00:19 +0000

Creating an AI policy and working group has helped this county look to the future.

Implementing an AI program can be daunting for any organization. But add in the constraints of working for a small county government and it can feel nearly impossible to keep up with the wave of new technology.

But Levar Cooper, director of communications for Lake County, Florida, has led the charge for not only the communications department but the entire county government to incorporate AI into its workflow.

Using his background as a web developer combined with his skills as a communicator, Cooper has helped his county create an AI working group and start using the tools that will define the workplace for the foreseeable future.

Here’s how he does it.

Answers have been edited for brevity and clarity.

How did you start your AI working group?

As we were using it across teams, we were looking at the things that we could do going forward, and when using it, we got a chance to kind of see what the risks were, what it’s good at.

On an organizational level, we didn’t have a policy. So thankfully, being a member of the executive management team, I was able to bring that to the team. And I think when you’re having that conversation, they’re more focused on business, the numbers, the day-to-day operations, and you’re talking about AI, which still sounds a lot like sci-fi. So, it took working with IT, because they’re going to have a pivotal part in this, because AI isn’t just the chat bots in the ChatGPT, there’s so much more that it can do, but a policy is necessary.

So, we worked with (IT’s) consultant. And funny thing is, I used a lot of those materials along with what I understand about the organization to kind of align things, even to the point where I uploaded some of those materials to ChatGPT. Gave it some additional context, added the strategic goals and let it work as our consultant of sorts.

Did you train a special GPT for that?

I did. Essentially, it was taking those materials, those best practices, along with some of the presentations that I developed for the executive team outlining the risks, outlining what an implementation would look like for Lake County. But then also making sure that the GPT was equipped with the organizational goals, like the mission and where it’s trying to go and some of the organizational makeup. And I let that be the consultant. Now, was it perfect? Absolutely not. But it did make something attainable that probably would have taken several months – and it still took a couple of months – but you see the thought that went into everything.
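
For communicators without a paid ChatGPT account who want to try a similar “consultant” setup, the same idea can be approximated in a few lines of Python against the OpenAI API. The sketch below is illustrative only: the file names, prompt wording and model choice are assumptions rather than Lake County’s actual configuration, and a true custom GPT is built in the ChatGPT interface, not in code.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical local files standing in for the uploaded consultant materials and strategic goals.
context = "\n\n".join(
    Path(name).read_text()
    for name in ["ai_best_practices.txt", "strategic_goals.txt"]
)

system_prompt = (
    "You are an AI adoption consultant for a county government. "
    "Ground every recommendation in the organizational context below.\n\n"
    + context
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Draft an outline for our first AI use policy."},
    ],
)
print(response.choices[0].message.content)

Pasting the documents straight into the system prompt works for short policies; larger document sets would typically call for the platform’s file upload or retrieval features instead.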

How did you go about training your team, including your executive team?

I try to make things accessible. One of the things I see in this space is people really want to be super technical. And let’s use all the terms: neural networks, and large language models and all. That’s fine. I want to understand it. It’s important that I understand it. But for the person who’s just on the other side trying to use it, and for the executive team who’s trying to understand what this thing is and how we can use it, it’s really not beneficial.

You mentioned that there were objections. Can you share with me some of those objections and how you overcame them?

I would call it more resistance … maybe hesitation in terms of wanting to adopt the technology.

I’m in government. If you’re in Amazon, and you say, ‘Hey, we want to do AI,’ and everyone’s excited. In government, it’s like, ‘Wait, wait, wait. We have to make sure there’s a policy in place.’ In one conversation, it was, ‘Until we get things together, can we just stop people from using AI?’ And I don’t think that’s the right way, either.

So, what we’re really encouraging is sort of an adaptive adoption of AI. We don’t have to revolutionize everything today. Let’s find some use cases. Let’s build some policy around those use cases, and as we continue to expand our use of it, we can further expand the policy based on really thoughtful and careful implementation, and then also working with different technology as it advances.

Tell me some of the things that you’re most excited using AI for, specifically in your comms department.

Our web services department, they’re using it for code generation. And one of the things that we are also mindful of is what we’re giving to ChatGPT. This is still a public platform, so nothing sensitive, nothing that would put us at risk or anything like that. But functions and things like that, it’s really good for. And then (Microsoft) Visual Studio has some built-in tools that will review code.

We have a very small team. Having that aspect to improve the quality of what we’re doing is helpful for creative services for ideation, just coming up with ideas, giving it brand information and then coming up with some concepts to kind of get the creative juices flowing. We also use it for image editing — if you have a vertical image and you need to make it a horizontal image, it can fill in some of that background space. We don’t use it for actually creating images, but for the communications side of things and content, it’s been amazing for us.

Obviously drafting content, and I emphasize draft, because it’s not ready to go. But it’s almost like getting a promotion, in the sense that, I’m not just a writer. Now I can go right to editor. I still have to do all that. I gotta get the talking points. I gotta get all the facts, understand who my audience is, but I can give all of that to ChatGPT and it gives me a draft and I can work from there versus starting from a blank page. That can be helpful for news releases. That could be helpful for even speechwriting, although there are some concerns around that.

Another reason why I believe communicators should be involved with this transformation is the socio-technical risks. Yes, the technology works, but what does it actually mean to your customer? If you wrote a speech after a tragic event, and it was this heartfelt thing, and it hit all the points, and someone runs that through an AI checker, and they find out that it’s written by AI –  what does that mean? How do the public feel about that? And I would bet they’re not going to like it very much.

Other than implementation, the thing that’s been surprising that I don’t hear as many people talking about is the research and the planning. It’s been tremendous for that. ChatGPT has a custom GPT that’s partnered with Dimensions, which is all this research, that’s another thing that we use. We use it for stakeholder meetings. We record meetings, we take the transcripts, put them in ChatGPT and give summaries, sometimes outlining requirements. Quantitative data analysis, and especially qualitative data analysis, which can be really resource intensive.
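
The transcript-summary workflow Cooper describes can also be scripted for teams that prefer working outside the chat window. Here is a minimal sketch, assuming a plain-text transcript saved locally and an OPENAI_API_KEY environment variable; the file name and prompt wording are hypothetical, not Lake County’s actual setup.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical transcript file exported from a recorded stakeholder meeting.
transcript = Path("stakeholder_meeting_transcript.txt").read_text()

prompt = (
    "Summarize this stakeholder meeting transcript in five bullet points, "
    "then list any requirements or action items that were mentioned:\n\n"
    + transcript
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

The same pattern extends to the qualitative analysis Cooper mentions: swap in a prompt that asks the model to group comments by theme or sentiment before handing the output to a human reviewer.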

What would you say to communicators who maybe are earlier in their AI journey than you? What advice would you give to them?

I would say continue to go forward if you haven’t. For anyone who hasn’t used it yet, my advice is to go to ChatGPT, because it’s really accessible. Type “hello,” and then, and then go from there. I really would say that’s a starting point, as opposed to a training. Because when you’re doing a LinkedIn learning or something like that and you don’t have the context, it’s like they’re speaking another language. For those who are further along, I go back to having those conversations. If you know someone else who is working in your space, someone who’s doing similar activities with AI, ask, ‘Hey, how are you using it?’ Because it’s something that we’re all going to be learning about for years to come, because it’s continually changing.
