Human-in-the-Loop: how human expertise enhances generative AI
https://www.lxt.ai/blog/human-in-the-loop-generative-ai/ | Mon, 26 Aug 2024

Human-in-the-loop: enhancing generative AI training with human expertise

Generative AI continues to evolve at a rapid clip. Innovations like Sora from OpenAI that can take short text descriptions and generate high-definition video clips are rapidly closing the gap between AI-generated and human-generated media, making it difficult to distinguish the source of the content. But while the advances we are witnessing are impressive, the content they generate can still contain errors and biases that only humans can detect. 

What is human-in-the-loop?

Human-in-the-loop (HITL) in machine learning is an approach that involves humans providing feedback to an AI system to improve its performance over time by correcting errors and identifying biases generated by the AI. While generative AI models have improved significantly over time, HITL has become increasingly important to ensure that applications built on these models are accurate, ethical and trustworthy.

An overview of human-in-the-loop in generative AI

There are several components involved in HITL for AI that help to improve its performance and quality (a minimal code sketch of the full loop follows the list):

  • Data annotation: The initial stage of training AI models is one area where humans play a key role by labeling data sets. The machine learning model will use this data to learn and make predictions. After the model is trained with the annotated data, humans review and correct the output of the AI, and their feedback is used to improve its accuracy. Establishing a continuous feedback loop helps the model to learn over time and enhance its predictive capabilities.
  • Model training and fine-tuning: In a supervised learning approach, human-generated data is used as ground truth to train the machine learning models. The model will improve its performance by comparing its predictions against human-generated data. 
  • Error correction and bias mitigation: After the AI makes predictions, humans evaluate the results, and corrections are added to the model to improve it. Humans can also correct biases by pinpointing where the model might be creating unfair outcomes.
  • Continuous learning: With many AI applications, the conditions and trends change over time which requires new data inputs to ensure accuracy and relevance. Human feedback plays an important role here as well.
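
As a purely illustrative sketch of how these components fit together, the toy Python below simulates the loop: the “human” steps are placeholder functions, and the model and data are stand-ins rather than a real annotation pipeline.

    # Illustrative human-in-the-loop cycle on toy data. The "human" steps
    # are simulated with placeholder functions; in practice they would be
    # annotation and review tools operated by people.
    from sklearn.linear_model import LogisticRegression

    def human_annotate(x):
        # Stand-in for a human labeler: label is 1 if the features sum > 0.
        return 1 if sum(x) > 0 else 0

    def human_review(x, predicted):
        # Stand-in for human error correction: return the correct label
        # when the model is wrong, or None when no fix is needed.
        true_label = human_annotate(x)
        return true_label if predicted != true_label else None

    raw = [[0.5, 1.0], [-1.0, -0.2], [2.0, -0.5], [-0.3, -0.9]]

    # 1. Data annotation: humans create the initial ground truth.
    X = list(raw)
    y = [human_annotate(x) for x in raw]

    model = LogisticRegression()
    for _ in range(3):                        # 4. continuous learning loop
        model.fit(X, y)                       # 2. train on human ground truth
        for x in raw:
            predicted = model.predict([x])[0]
            fix = human_review(x, predicted)  # 3. error correction by humans
            if fix is not None:
                X.append(x)                   # feedback folded back into training
                y.append(fix)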

The importance of human-in-the-loop for generative AI

Many significant advancements have been made in the field of generative AI, but gaps remain in the technology, underscoring the importance of humans in creating more accurate and inclusive AI. Humans play an important role in:

  • Improving accuracy: Humans are crucial to reducing errors in generative AI and ensuring that its outputs align with real-world expectations.
  • Ethical deployments: Humans can mitigate bias by reviewing the data used to train the AI, allowing for more inclusive and ethical outcomes.
  • Dealing with ambiguity and understanding context: Humans provide the cultural and ethical context that helps generative AI produce more appropriate outputs.
  • Model customization and personalization: When generative AI is developed for a specific industry or domain, human domain expertise allows organizations to create more tailored solutions for their customers. Human feedback can also help tailor models to reflect end-user preferences, making them more personalized.

And as mentioned earlier, human feedback is indispensable in the process of continuous learning so that models adjust to changing trends over time.

Applications of human-in-the-loop in generative AI

HITL is used across a range of generative AI use cases to improve the quality of their outputs, including the following; a small sketch of a common review-routing pattern follows the list:

  • Text: Editors review and correct text generated by models like GPT to make sure the content is accurate and appropriate for the target audience.
  • Image and video: Human review of AI-generated images and videos ensures they align with an organization’s brand identity and target audience, while also identifying potentially harmful content such as deepfakes.
  • Conversational AI: A human-in-the-loop approach for conversational AI involves humans reviewing and adjusting chatbot or virtual assistant responses to user queries for accuracy, completeness and relevance.
  • Text-to-speech: AI-generated speech can be reviewed by humans and adapted so that it sounds more natural and human-like, improving the user experience for customer service bots, virtual assistants and more.
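
The sketch below illustrates one pattern common to these use cases: automatically accepting high-confidence outputs and queueing the rest for human review. The (text, confidence) pairs and the 0.8 threshold are illustrative assumptions, not any specific product’s behavior.

    # Hypothetical HITL routing: send low-confidence generations to humans.
    REVIEW_THRESHOLD = 0.8

    def route(generations):
        """Split (text, confidence) pairs into auto-approved output
        and a queue for human reviewers."""
        auto, needs_review = [], []
        for text, confidence in generations:
            if confidence >= REVIEW_THRESHOLD:
                auto.append(text)
            else:
                needs_review.append(text)
        return auto, needs_review

    auto, queue = route([
        ("The capital of France is Paris.", 0.97),
        ("The treaty was signed in 1807 by both parties.", 0.55),
    ])
    print(len(auto), "auto-approved;", len(queue), "routed to human review")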

Scaling your human-in-the-loop program

While scaling an HITL program internally can be resource-intensive, partnering with an organization that has this expertise can significantly accelerate generative AI deployments. The benefits of this type of partnership include the ability to generate high-quality training data quickly, resource and cost optimization, and access to specialized expertise. You can learn more and see examples of how LXT is helping enterprises deploy reliable generative AI in this blog post.

How enterprises are accelerating successful generative AI deployments
https://www.lxt.ai/blog/how-enterprises-are-accelerating-successful-generative-ai-deployments/ | Tue, 30 Jul 2024

Earlier this year, we published our annual research report on AI maturity in the enterprise which included trends on generative AI deployment. 69% of respondents stated that generative AI is more important to their organizations than other AI initiatives, including 11% claiming that generative AI is much more important for their overall AI strategy. 

But it’s still early days in the deployment of generative AI, and companies are experiencing several bottlenecks, including security and privacy concerns, accuracy of the output, availability of high-quality training data, and fine-tuning of the foundation model.

Benefits of working with the right training data partner to accelerate generative AI deployments

Companies eager to capitalize on the efficiencies that generative AI delivers can accelerate their deployments and address many of these bottlenecks by working with an experienced AI data partner. This has many benefits including:

  • Generating high-quality training data 

Organizations focused on building responsible and reliable AI should train their models with high-quality, diverse datasets to improve model accuracy and inclusivity. Experienced data partners implement thorough quality control processes to optimize data integrity and reliability for their customers. They also have access to diverse groups of contributors so that the datasets they create are representative of a wide range of potential users. These contributors can also evaluate the accuracy of model outputs and provide crucial feedback for model retraining.
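
As one concrete illustration of such quality control (a common industry practice, not a description of LXT’s specific process), inter-annotator agreement can be measured with Cohen’s kappa, which corrects the raw agreement between two labelers for chance:

    # Cohen's kappa: chance-corrected agreement between two annotators,
    # a common quality-control signal for labeled datasets.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two annotators labeling the same six items (kappa is about 0.67 here):
    print(cohens_kappa(["pos", "neg", "pos", "pos", "neg", "pos"],
                       ["pos", "neg", "neg", "pos", "neg", "pos"]))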

  • Resource and cost optimization

Outsourcing data collection and preparation allows organizations to allocate their internal resources more efficiently and focus on core activities such as product development. While companies can try to collect and label data on their own, this presents challenges that an experienced partner is equipped to handle, including access to large and diverse contributor groups and the capacity to collect and annotate high volumes of data quickly.

  • Access to specialized expertise

Developing accurate generative AI in the realm of natural language processing requires linguistic expertise that many companies do not have in-house. Working with experienced linguists ensures that the AI can respond to human language in a nuanced and accurate manner. Some key areas of linguistic expertise include syntax and grammar, semantics, speech synthesis, part-of-speech tagging and named entity recognition (the short example below illustrates the last two). Further, a data partner can help with foundation model fine-tuning, guiding the model to produce the optimal output for a task based on a specific domain or context, and with prompt tuning, optimizing a set of prompts that guide the model’s outputs for specific tasks.
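
For illustration only, here is what those last two tasks look like with an off-the-shelf library; spaCy is just one example toolkit, and the snippet assumes its small English model has been installed with python -m spacy download en_core_web_sm.

    # Part-of-speech tagging and named entity recognition with spaCy,
    # shown only to illustrate the linguistic tasks named above.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("LXT helps enterprises in Sydney build reliable voice assistants.")

    for token in doc:
        print(token.text, token.pos_)   # part-of-speech tags, e.g. "build" -> VERB
    for ent in doc.ents:
        print(ent.text, ent.label_)     # named entities, e.g. "Sydney" -> GPE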

How LXT is helping enterprise companies create reliable generative AI

Organizations leading the charge to deploy generative AI are harnessing partnerships to accelerate and scale their deployments. LXT is helping companies on several fronts including:

  • Prompt and response creation: we’ve helped multiple companies develop high-quality inputs and outputs across several domains to enhance the quality and relevance of the generative AI application.
  • Prompt rating and ranking: we’ve also helped various organizations assess the effectiveness of prompts based on specific criteria and rank them, which also helps to improve AI accuracy.
  • Instruction fine-tuning: we’ve gathered diverse sets of input-output pairs where the input includes a clear instruction and the output is the expected response, helping to make the AI more adept at understanding and executing tasks based on given instructions (a toy example of such pairs follows this list).
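
As a purely illustrative example of what instruction fine-tuning pairs can look like (the instruction/input/output field names follow a common community convention, not a specific LXT schema):

    # Toy instruction fine-tuning examples, stored one JSON object per line
    # (a common format for such datasets).
    import json

    pairs = [
        {
            "instruction": "Summarize the text in one sentence.",
            "input": "Generative AI adoption is accelerating, but teams report "
                     "bottlenecks around data quality and privacy.",
            "output": "Generative AI is spreading quickly, though data quality "
                      "and privacy remain key obstacles.",
        },
        {
            "instruction": "Classify the sentiment as positive or negative.",
            "input": "The model's answers were consistently helpful.",
            "output": "positive",
        },
    ]

    with open("train.jsonl", "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")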

To learn more about LXT’s generative AI capabilities and how we can help you build more accurate and inclusive generative AI, visit our services page here.

Building a foundation for success with AI: Practical advice from AI leaders
https://www.lxt.ai/blog/building-a-foundation-of-success-with-ai-practical-advice-from-ai-leaders/ | Mon, 08 Jul 2024

With AI on the minds of business leaders around the world, organizations are racing to implement the technology and reap the benefits. This year as part of our Path to AI Maturity research, we found that 72% of companies consider themselves to be at the higher levels of AI maturity as defined by the Gartner AI Maturity Model. As part of our research, we asked survey participants to share one piece of advice that can help others succeed with AI. The responses ranged from how to set an AI strategy, to ethics, to data considerations. 

Setting the strategy

With increasing pressure to drive growth while managing costs, AI can often be seen as a magic wand that can solve many business challenges. However, AI leaders recommend a more methodical approach to deploying AI. One survey respondent had this advice to share: 

“Understand the problem you are looking to solve. Many people try to solve something that is too large and unachievable. Set realistic goals and work towards them.”

By successfully deploying low-hanging-fruit use cases first, teams can apply what they learn to larger ones. Another AI practitioner explained, “I would suggest starting with limited scope pilots focused on a single use case rather than trying to deploy AI across the entire organization all at once.” Look for processes that are working well and can be further optimized with AI.

Deploying responsible AI

Another participant in our research study focused their advice on the ethical side of AI with this guidance:

“Develop a clear and comprehensive ethical framework for AI deployment. This framework should address issues such as fairness, transparency, accountability and impact of AI on various stakeholders.” 

An important point is that this ethical framework should be developed early in the strategy formulation process, and that these guidelines be established before deploying or scaling any AI solutions. The ethical framework should be a guiding principle in strategic decisions and AI deployments.

There are multiple components to the development of a strong ethical AI framework:

  • Creating an AI ethics committee: this should be a group of diverse stakeholders who can bring a variety of perspectives to the conversation. This committee is responsible for setting the ethical guidelines, reviewing practices and handling ethical challenges that may arise.
  • Applying the ethical framework: AI product and project managers are responsible for applying the guidelines in the day-to-day development and deployment of AI technologies throughout the product lifecycle.
  • Ensuring compliance: Legal and Compliance departments ensure that the AI ethical framework complies with international, national and local regulations. This is an ongoing process that should be reviewed regularly in response to new technology developments, legal and compliance requirements, and practical experiences from AI implementation.

In the end, deploying responsible, ethical AI is key to building customer trust, and needs to have accountability across the organization.

The importance of data for successful AI

In a recent episode of our Speaking of AI podcast, guest and thought leader Jeff Winter stressed the importance of a data strategy to succeed with AI:

“Companies should have a clear data strategy before diving into AI…this means investing in the process of cleaning and organizing their data, because no matter how advanced the AI is, it can’t work its magic with messy data…The moment someone says that they have an AI strategy I go, great, can I see your data strategy?”

Several respondents in our survey shared a similar view, providing specific examples of what to think about in building this strategy: 

  • “Make sure the data you have is reliable, clear, and reflects the issue you are attempting to solve.”
  • “Implement robust data governance policies to maintain data integrity, security and compliance. Establish protocols for data access, storage, sharing, and updating to ensure consistency and reliability.”

Another participant highlighted the connection between data and ethics: “Recognize and correct biases in the data that may affect AI results; this calls for thorough study, algorithms for detecting prejudice, and ethical considerations.” 

Once the initial data pipeline is established, training datasets should be monitored and evaluated regularly for a range of factors including quality, data drift, relevance, privacy, ethical considerations and bias.
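
As a hedged sketch of what such monitoring can look like, the example below flags a numeric feature whose recent distribution has drifted from its training-time baseline; the two-sample Kolmogorov-Smirnov test and the 0.05 threshold are illustrative choices, not a prescribed standard.

    # Minimal data-drift check: compare recent values of a feature against
    # the training-time baseline with a two-sample Kolmogorov-Smirnov test.
    from scipy.stats import ks_2samp

    def drifted(baseline_values, recent_values, alpha=0.05):
        """Flag a feature whose recent distribution differs significantly
        from the baseline it was trained on."""
        statistic, p_value = ks_2samp(baseline_values, recent_values)
        return p_value < alpha

    baseline = [0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.18, 0.22]
    recent   = [0.60, 0.70, 0.65, 0.80, 0.75, 0.70, 0.68, 0.72]

    if drifted(baseline, recent):
        print("Feature drift detected; review and refresh the training data.")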

Putting it all together

The collective advice shared by enterprise AI practitioners shows that setting a strong foundation for an AI program requires a holistic approach: identifying realistic goals for where AI can be applied, building a thorough data strategy to support the AI program, and establishing an ethical framework to ensure that AI is deployed responsibly. For more insights into how enterprise organizations are deploying AI today, download our Path to AI Maturity 2024 report here.

New Report: ROI of Training Data 2024
https://www.lxt.ai/blog/new-report-roi-of-training-data-2024/ | Tue, 02 Jul 2024

Earlier this year, we released the Path to AI Maturity 2024, our research report detailing the findings from our third annual study on the state of artificial intelligence (AI) maturity in the enterprise. This year, 72% of U.S.-based organizations state they have reached the three highest levels of AI maturity as defined by Gartner’s AI maturity model. Companies are rapidly moving out of the experimentation stage and into the production phase, where they are generating a positive return on their investments.

Risk management is currently the leading driver of AI strategies in the enterprise, but organizations also see AI as a means of being more agile, driving differentiation, and managing their operational performance, among other objectives.

The types of AI applications deployed in the enterprise vary from search engines to speech and voice recognition to computer vision, demonstrating the wide applicability of the technology.

These and many other insights can be found in our Path to AI Maturity 2024 report.

In our new report on the ROI of training data, you’ll learn how enterprise organizations evaluate their training data investments, and how this varies by stage of AI maturity. You’ll discover how training data is sourced and the various roles that data service providers play in building AI data pipelines.

The survey included responses from 322 senior decision-makers at US organizations with at least $100 million in annual revenue and more than 500 employees. More than half of respondents were from the C-Suite and all those who took part had verified AI experience. To review the full findings, download The ROI of High-Quality AI Training Data 2024 here.

LXT podcast – episode 11: Karin Golde – Founder of West Valley AI
https://www.lxt.ai/blog/lxt-podcast-episode-11-karin-golde-founder-of-west-valley-ai/ | Wed, 29 May 2024

In the 11th episode of Speaking of AI, LXT’s Phil Hall interviews Karin Golde, founder of West Valley AI. Phil describes Karin as living at the intersection of AI, data, human AI alignment, and product strategy. Join them as they dive into what this means, connect over their shared background as linguists, and discuss leadership development, bias in AI and misconceptions about AI hallucinations.

Introducing linguist, leader and founder of West Valley AI, Karin Golde

PHIL:

Today’s guest on Speaking of AI is someone who resides in the Bay Area, but she lives at the intersection of AI, human AI alignment and product strategy. She has a PhD in linguistics. She’s held senior leadership roles at NetBase, Quid, Clara Analytics and Amazon Web Services. And she’s the founder of West Valley AI. Please welcome Karin Golde. Hi Karin.

KARIN:

Hi, thanks Phil, it’s nice to be here.

What does it mean to live at the intersection of AI data, human AI alignment and product strategy?

PHIL:

Great. So from your perspective, what does it mean to live at the intersection of AI data, human AI alignment and product strategy?

KARIN:

Well, what I’ll say about that, let’s see. So living at the intersection of AI data and strategy and product strategy, it all comes back to kind of a human-centric approach. Language is a human construct. We sometimes think of it as just black-and-white marks on a page, just some text that’s living independent of us. But really it’s all about how do we use language to achieve our goals? And so to do that, we really have to think holistically about how do we approach language from a technical perspective, but also from a usability perspective, and an ethical perspective as well.

How has the role of the linguist changed over time?

PHIL:

In 1998, Fred Jelinek reportedly said, “Every time I fire a linguist, the performance of the speech recognizer goes up.” Was that a fair comment back then? Has the role of the linguist changed over time? And is linguistics relevant to our industry, and in particular to AI, today?

KARIN:

Yes, well, I will say I’m familiar with that quote and I think it comes from a place that, you know, linguists at heart, we love language and we can’t let go of that love, right? So let me give maybe a little more context for some of your listeners too, about what he was talking about there. Back then, there were still a lot of symbolic systems in play in the AI industry, which wasn’t called AI at that point. But there were systems that were trying to recreate grammar as we hypothesized that we use it as humans.

So it would be a matter of doing careful parsing of sentences, making sure that you understand the subject, verb, object, any adjective phrases, what do they describe, what are they attached to? So you get into these very complex linguistic representations of language and then you operate off of that when you’re building a system to interpret what language means. This can be a very manual process in a lot of ways, you know, building out these rules, testing them, making sure that the sentence structure is coming out the way you expect it.

And so I think people got frustrated with the slowness of linguists building those systems. And what they found was that you can take statistical approaches and just look for probabilities, just use math to pick up on the features of language and output something like a classification (what class does this text belong to?) or extract information from it (where are the names of people or places, and so forth?). And so I think what he was referring to there was, I can just use math to do this faster. And there’s always been a tension in the industry between those two points of view. And I’m sympathetic to both, really.

In today’s AI landscape, is linguistics still relevant?

PHIL:

Yeah, look, me too. Full disclosure for anybody that’s watching this, I’m a linguist as well. But with that in mind and with the deep learning techniques that are prevalent today, is linguistics relevant or has it, have things moved on to a point beyond there?

KARIN:

Well, yeah, I guess there’s this basic analytical stance towards language that you get from being trained as a linguist, which I think still holds a lot of value. So it’s very subtle, but this is something that I’ve really seen in practice at the various companies I’ve worked at over the years. You can always tell from the conversations in a meeting who has linguistic training and who doesn’t when they’re looking through language data. So, we know that language is this intermediary between humans and knowledge, right? But by itself, language does not constitute knowledge. The knowledge comes from both the semantics of the text, the meaning of the individual words, how they get put together to make higher-level meanings. So that literal meaning, but also it comes from the pragmatics, you know, some additional conclusions, inferences that we make as individual humans.

So, for example, if we read a restaurant review and it says, I had to raise my voice to talk to the people at my table, right? We take that to mean the restaurant was noisy. We don’t take that to mean that the people at the table were hard of hearing or something. We assume it’s relevant to the restaurant review. And what I’ve observed is that linguists look at that and they look at the language analytically and they think, wow, this is gonna be really complex to capture programmatically, something like restaurant noise, because it’s not gonna just be people saying the restaurant was noisy, there’s gonna be all kinds of ways to indirectly express that. People who have not had training in linguistics sometimes will kind of skip that analytical step and think, well, it’s obvious. You can see that the restaurant was noisy just by reading this. And they don’t see the difference between the semantics of what’s on the page versus the pragmatics of what you as a human bring to interpreting the meaning of that statement.

What advice would you give to an institution that’s designing a contemporary linguistics course?

PHIL:

Yeah. So I was recently asked to provide input into course design for a bachelor of linguistics course at Macquarie University here in Sydney. And the background to that is that in my various roles over the last 25 years, I’ve hired a lot of linguists and so they came to me as somebody who might be able to offer some suggestions on what makes a linguist employable.

So, what advice would you give to an institution that’s designing a contemporary linguistics course? And are there particular skills that you feel are must-haves and other linguistic areas that you’d suggest are maybe less relevant in this field?

KARIN:

Yeah, so I’m going to say first and foremost is data collection. And I think talking to people who are doing university courses right now, it seems like that is being addressed better, at least than when I was in school, or maybe it was just my program. But it wasn’t really an emphasis. You kind of assumed the data was there, or you had sort of armchair explanations of like, well, this sounds right, kinds of things. It wasn’t quite so much data-driven. But the data collection part itself, I think, is really fundamental to both linguistics and AI.

So as a linguist, you want to base any kind of theoretical hypotheses you have on actual data that you’ve collected and tested out on that data. It’s also really valuable, of course, to have linguistic data on more scarce phenomena like low-resource languages. So, it’s a very valuable exercise in itself to do this data collection. But how to do that ethically? How are you going to crowdsource it? Are you going to go into communities and ask for it? How are you going to compensate people for it? How do you manage the data through its life cycle? So how do you learn about data governance principles? That would be probably my number one recommendation for building linguistics courses. And like I said, I think that people who do any kind of machine learning should be thinking about that as well.

That’s also, it’s a gap over there, too. And so the more people know about data collection, the better. I think the other thing that linguists do bring that is still really valuable and should still be a core part of courses is understanding multilingual considerations. So, the industry is really dominated by English, right? And that’s partly a business thing. It’s where the money usually is when you’re first building a product, that’s the first market you want to serve as an English-speaking market. And then the rest of the languages are developed as an afterthought. And that’s unfortunate, right? From a few perspectives.

But if a linguist comes in and takes the perspective that English is just one of the many ways to represent meaning, then you can help avoid developing technologies that take English as the norm and everything else as the weird language. I mean, English is just as idiosyncratic as any other language. So you really want to build from the start for all languages and not just for English.

What advice do you have for people who are pivoting into a leadership role?

PHIL:

Yeah, great advice. I will actually pass that advice on to the people I’m speaking to, in fact. For my next question, I’d like to get your thoughts on the concept of leadership. So, in today’s workplace, it seems to me that a lot of weight is placed on the value of leadership skills, but that outside of, say, MBA programs or specialized training programs like military officer training and so forth, there’s not a lot of weight placed on this in educational settings. Now you’ve held some impressive leadership roles and you’ve listed ethical leadership as one of your skills.

What advice do you have for people who are strong individual contributors in an organization and are maybe approaching the point where their next career move is going to require a pivot into a leadership orientation? And of course, what from your perspective is ethical leadership?

KARIN:

Yeah, that’s a great question because I think most people just kind of get thrust into a leadership position and it’s assumed that they’ll figure it out because they’ve had managers before and so they’ve seen people do the work so now just do what you saw people doing.

And it’s really not that simple. I think the other misconception is that some people are kind of born leaders and others aren’t. But leadership skills really are skills that can be learned. And there’s no one type of leader that, you know, you have to be this kind of person to be a leader.

I think pretty much anyone who has the desire to do so can look at the skills they need to grow and say, yes, this is for me, or no, I don’t want to go in that direction. So I think first and foremost, treat it as a learning exercise. Expect to have to learn. Don’t expect it to just kind of come to you. So that would be the first thing.

What constitutes ethical leadership?

PHIL:

And regarding ethical leadership as a concept, I guess I have a take on it, but I suspect that’s idiosyncratic. And I’d love to know if you think there are a set of boundaries around what constitutes ethical leadership. I think that would be an interesting thing for people to hear.

KARIN:

Yeah, so I think as far as ethical leadership, I think the hardest part that I’ve had to deal with is a couple things. One is the issue of fairness to your team, right? So you have to think about what’s good for your team as a whole, what’s good for the company as a whole when you’re dealing with individuals. And that doesn’t mean like being unfair to people or throwing them under the bus, it just means figuring out are they contributing to the team the way you expect?

If not, where is the issue? Is it that your expectations aren’t clear? Is it that they have a gap in the skills? Can that gap be closed? All of these things have to be addressed immediately and upfront. And I think doing that right is a big part of being an ethical leader, rather than just letting problems fester. That can have a negative effect on the rest of your team; other people can perceive it as unfair if you let somebody continue to underperform without addressing it. So those are kind of the management aspects of it.

And then the other way in which it can be difficult is when you don’t agree with the decisions of senior leadership, but you still need to communicate those to your team. So I think that’s where you have to really be in touch with your own value system, and understand to what extent it overlaps with the organization’s and where there are possible disconnects. And you have to decide whether the decisions that are being made are just ones that you disagree with because you think it would be better to do something differently, maybe we would get more customers or it would be more fun, or whether the decisions you disagree with cross an ethical boundary for you. And that’s really tricky, because once you’re aware of that, you have to make some hard decisions. Your approach can’t be to just go to the team and complain about senior management. That’s not a thing that you can do. What you have to do is figure out, are you going to disagree and commit on this one? Or are you going to push back? Or are you going to even leave the company because it’s just not a good fit for you?

What challenges are you helping clients solve, where are they getting AI right, and where are they struggling?

PHIL:

Yes, there’s a lot in there that resonates for me. So now that you’ve entered the world of consulting, what challenges are you helping clients to solve? And where are they getting AI right? Where are they struggling?

KARIN:

Yeah, well, for the people I’ve been talking to, I think the biggest issue that they’re having right now is with evaluation. Especially with the current systems, which you can adopt a little bit more easily than before: previously, you could build machine learning models, but it would take a team of machine learning engineers or data scientists. And now you have more of an opportunity to use APIs to directly connect to large language models like ChatGPT and get your answers directly and have that be part of your system. And that feels really magical, and it often will work really well in a demo setting or even in kind of a beta mode. But then the question is, okay, great.

Now we have it in production because it felt good enough to release, but how do we really know it’s working? How do we know if it’s going off the rails? Those kinds of evaluation systems, I think, are a really interesting area of development right now. There’s some very interesting platforms out there that you can use to semi-automate evaluation of responses in production. So I would say that that’s one of the biggest issues that people are facing right now.

What are key considerations when somebody’s building a data pipeline to ensure that it’s fit for purpose?

PHIL:

We’ve talked a little bit about data. In fact, you’ve emphasized the importance of data and data collection quite strongly. What are key considerations when somebody’s building a data pipeline in order to ensure that it’s fit for purpose? And do you think companies are investing enough in data?

KARIN:

Hmm. You know, I think data has gotten a bit more public attention, certainly lately, with people being concerned about web data being used for pre-training large language models and so forth. It’s definitely always been an issue, though: what data are you going to use, and how are you going to build a pipeline that goes all the way from collecting it, to training the model, to holding out some for evaluating the model, and so forth. I think that you can always do more.

So I think to answer your question, are companies doing enough? I think they can definitely do more. And there are certainly some, I’m thinking right now about the data-centric AI movement. Right, so this was something that Andrew Ng, who is a fairly well-known researcher in AI, coined this term or made it popular at least around 2019 or so, about five years ago. And it’s definitely caught on in certain circles. Certain people do understand that there is this concept that you can iterate on getting better and better data and get better results that way than iterating on trying to train a better and better model using fancier algorithms and so forth.

So I think that there is a certain amount of kind of penetration into the industry of recognizing the importance there. But, you know, as more and more people kind of get into AI beyond the original sort of set of data scientists, machine learning engineers, now it’s like software developers and product managers and everybody is sort of in the game. I think we do need to continue to raise awareness about the importance.

Do you think that bias will limit the extent to which AI can reach its full potential?

PHIL:

Great. Well, that’s a good lead into my last small set of questions here. So these are all based around ideas of bias. So in terms of AI itself, do you think that bias is going to limit the extent to which AI can reach its full potential?

KARIN:

Yeah, bias is difficult because, to a certain degree, bias is kind of the whole point of machine learning models. Machine learning models are very agnostic. They’re just a tool to interpret whatever data you give them. And so you do want them to learn and pick up on the patterns in the data that the models consume, not to anthropomorphize. What counts as undesirable bias is a much trickier question. One way we often think about it, and a very important consideration, is: how might the output adversely affect protected classes?

So for example, if you’re building a machine learning model to screen resumes, and there was a famous example of this happening at Amazon in 2018, which they quickly corrected, it’ll be difficult to say for any one resume whether it was rejected or accepted based on cues to the person’s identity. What they found in the case of Amazon’s internal resume screening process was that it was discriminating against people who went to historically black colleges or all-women’s colleges. And it wasn’t so much that it was picking up on any identity characteristics in particular, but since those people had been discriminated against in the past, it picked up on that and kind of continued and amplified that discrimination.

So if you look at the overall patterns, you can kind of start to see where the problems are, and you can build test data sets that probe for a particular bias once you’re aware of it or once you suspect it might occur. So you can have resumes that are identical except for certain characteristics, like whether the applicant went to a historically black university, and you can test for bias in the system and try to correct it with additional training sets. But you really have to be aware of it and looking out for it. It’s not going to just jump out at you. And you also have to have a clear set of values about what counts as negative bias for your application.
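
A minimal sketch of the kind of paired-resume probe Karin describes might look like the following; score_resume is a hypothetical stand-in for the screening model under test, not a real system.

    # Counterfactual bias probe: score resumes that are identical except
    # for one attribute and compare. `score_resume` is a hypothetical
    # stand-in for the model under test.

    def score_resume(text):
        # Toy stand-in; a real probe would call the actual screening model.
        return 0.9 if "State University" in text else 0.7

    TEMPLATE = "BS in Computer Science, {school}. Five years of Python experience."
    pairs = [
        ("State University", "Howard University"),   # HBCU counterfactual
        ("State University", "Wellesley College"),   # women's college counterfactual
    ]

    for school_a, school_b in pairs:
        gap = (score_resume(TEMPLATE.format(school=school_a))
               - score_resume(TEMPLATE.format(school=school_b)))
        # Identical resumes should score identically; a gap signals bias.
        print(f"{school_a} vs {school_b}: score gap {gap:+.2f}")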

How do we deal with the inherent bias of real-world unstructured data?

PHIL:

Okay, my next question, maybe my next question should have been before the previous question, but in terms of bias in data, my view is that real-world unstructured data is inherently biased. That doesn’t mean necessarily negatively biased, but everything’s from a perspective.

If it’s real-world data, everything has a perspective. Is this a problem or is it just a fact of life that we… Yeah, we deal with it as humans. Do we just deal with it from a technology point of view as well?

KARIN:

Yeah, it’s a really good question because, as you’re kind of saying here, we get a lot of training data from the web. And especially these days for large language models, we use news and blogs and message boards and so forth. And we often think of this as like, that’s just a mirror of the world. And so any bias that’s in there is just, it’s going to be reflected by our systems. And so there’s really not much we can do about it. But it’s really not just a mirror reflection of the bias. It’s also an amplification of that bias.

So because these models deal in probabilities, if something is probable, it becomes the norm. And it kind of erases whatever diversity there originally was. So I saw a study recently that analyzed AI-generated images. It looked at the actual distributions of ethnicity and gender across different types of occupations and found that the generated images really amplified the existing tendencies. So for example, all of the images of flight attendants that the AI generated were women. But in reality, in the US, only 65% of flight attendants are women. So it’s not even just that it repeats what we’re telling it, it really takes that to be gospel at some point.

PHIL:

Yeah, yeah, I’ve observed a parallel to that with music selection algorithms. I’m looking for things that I haven’t heard before. But it doesn’t matter: if I pick a piece of jazz music, it really doesn’t matter what genre of jazz, what period, whatever I pick, within an hour or so I’ll get Kind of Blue by Miles Davis. It doesn’t matter where I try and direct it. I know that that’s coming.

KARIN:

Yeah, and you know, speaking of, I have these problems too, because I love discovering music through algorithms, but it’s not great. And I think there’s also a bias against older music, right?

Because newer music hasn’t had a chance to really pick up a lot of features that the algorithm can learn from. And so, it kind of, like you said, keeps going back to the same old standards that it’s sure that you’re gonna like.

Within the industry, as a founder of a startup, have you faced negative gender bias? Is this a barrier that you’ve had to overcome?

PHIL:

Yeah, yeah. You’re going to love this one, because everybody does. And then my final question on that topic of bias is not really a technological one. It’s just a social one. So within the industry, as a founder of a startup, have you faced negative gender bias? Is this a barrier that you’ve had to overcome?

KARIN:

It’s a really good question. So for me personally, I would say that over the course of my career, I’ve definitely been in situations where I get a weird vibe, right? And there’s never anything overt. So, you know, one answer is that yes, I have my suspicions, but you can never really pin it down. And it’s also like the water you’re swimming in, right?

And I actually was talking a while back to a trans woman who had been in the software industry working as a man for quite a while and then transitioned to being a woman, and she said the way she gets treated is way different. So there are a few people out there who can really give us some A/B testing and report back. So it’s definitely out there. It’s just really hard to know, when that’s what your experience has always been, whether it’s different from other people’s.

Let’s talk about AI hallucinations…

PHIL:

Great. I’ve got one more question. This is kind of putting you on the spot. We’ve gone through a range of different areas here. If you were running the interview, is there a question that I should have asked you but I didn’t? And if there is, what is it and what’s the answer?

KARIN:

I think I want to… I want to talk about hallucinations because it’s a thing that bugs me. So we kind of touched on this a little bit, the way that people kind of misinterpret language models and the way that people think of language as being inherently knowledge carrying, whereas actually it’s only part of the picture.

So let me back up and explain that a little bit more. So large language models are frequently correct. So when you ask ChatGPT a factual question, it often gives you the right answer. If you aren’t able to fact check that answer, a lot of times it just sounds right. So it probably is right. And so you kind of get into that habit of trusting it. Well, now and then you find something that you know isn’t right, or you go and fact check something and you find it’s not right. And so people talk about these as hallucinations that the system has, and there are all kinds of techniques developed to try to limit that and have it just repeat factual information. Sometimes when I hear people talk about this, it feels like foundationally they’re just not thinking about it the right way.

Hallucinations are really a feature, they’re not a bug. The way the whole system was designed was to spit out probable sequences of words that sound like human language. You are the one who takes that language as a human and creates knowledge from it. The knowledge does not reside in there. So it’s not that the language model generally stays on this nice narrow track and hallucinations occur when it just kind of goes off the rails, and you just have to put it back on the rails and now it’s back on track again. There are no rails, right? It’s just gonna go wherever it goes, and it just happens to coincide a lot with things that we know are true.

So I think we need to really get comfortable with that and not try to make large language models do something they weren’t designed to do. The use cases should really be ones that call for creativity and not critical decision-making. I think in general, structured knowledge… Maybe another question for me is what are my predictions for the future? And so I’ll answer my own question. I think structured knowledge will become increasingly valued as a result. I think things like knowledge graphs and ontologies, which incorporate facts and give complex networks of facts a home that is transparent in a way that large language models are not, are going to play an increasingly important role as complements to large language models.

PHIL:

Karin Golde, thank you for making the time to speak today. It’s been a genuine pleasure and extremely informative. I’m sure people are going to thoroughly enjoy hearing your views on things.

KARIN:

Well, thank you very much.  It was wonderful to be here.

PHIL:

Yeah, I look forward to meeting in person when I’m in the area sometime.

KARIN:

Absolutely.

PHIL:

Thanks again. Bye bye. Thank you.

LXT podcast – episode 10: Jeff Winter – Hitachi Solutions
https://www.lxt.ai/blog/lxt-podcast-episode-10-jeff-winter-hitachi-solutions/ | Wed, 08 May 2024

In the 10th episode of Speaking of AI, LXT’s Phil Hall chats with Jeff Winter, digital transformation enthusiast and Hitachi Solutions’ Senior Director of Industry Strategy. In addition to his career as a leading technology and business strategist, Jeff is also the global number one Industry 4.0 influencer, according to Onalytica. Tune in to this episode to learn more about Jeff’s take on LXT’s recently published Path to AI Maturity report, generative AI, the current state of the AI evolution, and how companies can begin, or further, their AI and digital transformation.

Introducing influential technology commentator and Industry 4.0 leader, Jeff Winter

PHIL:

Today’s guest is a multi-award-winning technology expert who has held senior roles in strategy development and related areas at organizations including Rockwell, Omron, Microsoft, and currently Hitachi Solutions. He sits on multiple technical and advisory boards, working on strategies for IoT, manufacturing, and automation. And according to Onalytica, he is the global number one Industry 4.0 influencer, quite an achievement. In his spare time, he’s a highly influential technology commentator with a huge audience of over 100K dedicated industry insider followers. Please welcome today’s guest, Jeff Winter. It’s great to have you aboard, Jeff.

JEFF:

Thanks for having me here, this will be fun.

Where are companies getting it right and where are they struggling? What advice do you have for companies who want to capitalize on the transformative impact of AI?

PHIL:

Yeah. So in recent weeks, we released the third edition of LXT’s Path to AI Maturity report. For today’s discussion, it will be great to hear your take on the current stage of AI evolution and your interpretation of some of the key findings from the report. So, Jeff, over the course of your career, you’ve seen the evolution of technology and the impact of digital transformation.

With the massive paradigm shift we’re now seeing with generative AI, where are companies getting it right and where are they struggling? What advice do you have for companies who want to capitalize on the transformative impact of AI?

JEFF:

So I might take a slightly different spin on this question. I’m going to actually start with where companies are getting it wrong before we get into anything else. First, from a personal perspective, I don’t think most business leaders understand generative AI, or AI in general. And I’m not an AI technical expert, but I can tell when someone really doesn’t understand what they’re talking about. Most business leaders have had no need to understand AI, up until they were forced to with ChatGPT. The public pushed this on CEOs, making sure that nearly every company, at least publicly traded ones, had some sort of stance or answer on what they were going to do with AI. And that is why we see it continually as one of the top things talked about by companies, according to IoT Analytics’ tracking of what CEOs talk about most each quarter.

And so whenever I hear anyone say, we need to use AI to fix it or to solve it or to improve it, that’s an immediate red flag for me, because AI is such a broad and ubiquitous term that it actually has little meaning except as a useful buzzword. I mean, think about this. If someone asked you a question about your company’s strategy and you responded, oh, well, we’re just going to use the internet to solve that or to grow our company, or we’re going to use electricity to solve that. How silly does that sound? AI isn’t that much different. So in my experience, most executives can’t tell you the difference between machine learning, generative AI, and deep reinforcement learning: three massively different forms of AI used in completely different applications.

So the first thing I’m going to say is start with educating yourself just on the bare basics so you understand really what AI is. The second thing I want to say is data management and quality. Many companies struggle with managing the sheer volume of data required for effective AI training, often facing issues with data quality and accessibility.

The push to use AI is just exposing how poor a company’s data quality really is, along with their data management practices and their data strategy. And third, I would say, is underestimating the cultural change required. Adopting AI is not just a technological shift, but a cultural one. Some organizations underestimate the need for change management, leading to resistance and/or underutilization of artificial intelligence capabilities. Your teams need to understand what AI is, how it is changing the industry and their jobs, and how it can make them more effective.

So that leads into what advice would I give? Well, first is develop a clear AI strategy. But when I say this, I don’t mean a standalone AI strategy. I don’t want to see an AI strategy document. I want to see AI incorporated into your company’s business strategy. That’s where your AI strategy should live, as part of your business strategy, not by itself. You should review and update your company’s entire business strategy to consider how AI can impact, both positively and negatively, all aspects. You don’t need separate AI goals. You need business goals, and to evaluate AI to understand how it can help those goals and how it can help achieve the strategy for those goals.

Second is embrace a culture of innovation. So encourage a culture that supports experimentation and learning, allowing for failure and continuous improvement in AI initiatives. Because if AI is used correctly, it helps automate some tasks and augment other tasks. So make sure your company encourages experimentation with AI as a tool to help improve both the company and each person’s individual job. Think of turning Tony Stark into Iron Man. Don’t think about having a replacement AI robot. Think about taking a person and making them significantly more powerful. There’s a big difference in that mindset. And lastly, focus on scalability and sustainability. Ensure that AI implementations are scalable and sustainable, considering long-term impacts and how they evolve with the technological landscape.

You mentioned automation and augmentation, could you dig a little bit more into the distinction between the two?

PHIL:

That’s great. And I’m sure people will take a lot away from that. I certainly do. Could you dig a little bit into that distinction between AI for automation versus AI for augmentation? Some maybe use case examples.

JEFF:

Sure, and this is the biggest way I like to break up how to think about AI. Because AI is a tool that helps make decisions. That’s what it is. So you can either use it to help you make a decision, that’s augmenting. And you can use that, for example, for predictive analytics, where you’re analyzing a bunch of data and providing a forecast for you to make a decision based off of the information that the AI provided.

The other is automation. This is where you’re handing over the wheel to the AI, where it’s making the decision for you. In common terms, a self-driving car is an example. But in the manufacturing industry, which is where I mainly play, this is about having AI do real-time control of production processes where you’re not intervening at all. It’s figuring out the best way to optimize the line, for example, to produce the highest yield.

What initially piqued your interest in LXT’s AI maturity reports?

PHIL:

Very cool. Okay, so you’ve been a frequent and insightful commentator on LXT’s AI maturity reports. Do you recall what it was that initially caught your eye? Do you think we’ve been asking the right questions?

JEFF:

So good question, and the simple answer is I was Googling AI maturity statistics and ran across LXT’s report, and I instantly liked the way that they organized the report, especially how they overlaid it onto popular models out there that I already knew, like the Gartner model, which helps to kind of anchor your understanding. Now, some of the questions in the report are things that I just haven’t seen any other places, especially business drivers by industry, types of AI used by industry, and, my personal favorite, types of data used by industry. So do I think that they’re asking the right questions? Yes. And I would say the only thing I would ask for is more: ask more questions and expand your survey pool, because the information is great.

Companies are stating that they have reached higher levels of AI maturity in recent reports, what do you think is driving these companies’ self-perceived growth?

PHIL:

Great, and I know you have actually given us some guidance on upgrading the questions, augmenting them in the more recent edition, and Jodie has really appreciated that. So, Jodie Ruby, for anyone that isn’t familiar with her, her name’s not on the cover of the AI report, but she’s the author, she’s the driver behind this. So thanks for your help with that.

Since we released our initial AI maturity report in 2022, we’ve seen some massive shifts in AI awareness and maturity. Back in 2022, just 40% of companies said they’d reached the higher levels of AI maturity. And our latest research has 72% of companies claiming, and bear in mind that when I say claiming, this is self-perception, not objective reality, that they are now at higher levels of AI maturity. Is this consistent with what you see, and what do you think is driving it?

JEFF:

So this is an interesting one because, like you said, it's self-reported, which in fairness is the only way to really conduct a survey. I would be curious to see how this compares against companies' actual maturity rather than a self-proclaimed, perceived maturity. But doing that would require an expert to go in and assess each company, and that's a massive undertaking. In most reports you see on technology adoption, even in my main field, Industry 4.0, most companies claim that they are higher than their peers. And that's just not possible statistically – it's like the studies showing that everyone thinks they're a better driver than everyone else.

Now, the proliferation of ChatGPT has single-handedly increased everyone's interest in the whole field of AI. I can't think of a single company that isn't investing in AI or exploring it across the company. That level of awareness, just focusing on figuring out what to do with AI, has absolutely increased dramatically. You can't attend a conference or read an industry or tech-related magazine without AI being the key focus, regardless of what industry you're in or what conference you attend.

And because of this, almost everyone is out there learning about the technology, the products, and the applications. And once you know something better, you immediately think your maturity on the subject is higher – regardless of whether you're actually using it or not, you perceive your maturity as higher. But I would say very few companies, if any, are objectively fully AI-mature, even if some of them think they are.

What steps should companies that are still experimenting with AI take to ensure that they don’t get left behind?

PHIL:

Yeah, your answer doesn't surprise me, but it's great to have it encapsulated in a summary like that. Okay, so if these reported maturity levels were indicative of actual maturity in the marketplace, what steps do you think companies that are still just experimenting with AI need to take in order to ensure they don't get left behind?

JEFF:

To put it simply, I would say companies that are dabbling in AI need to shift from playing with the technology to integrating it into their core operations. AI is one of those technologies that should be the central nervous system of your entire company, because it is one of the few technologies out there that really impacts every role in the company, from the frontline worker to the CEO, across every single function. There aren't many technologies that can claim that. The technology is there – I want people to know that – and it works.

But the biggest killer of AI projects isn't the tech itself. It's the resistance from people within the organization, which is a cultural hurdle. Companies have to nurture a culture that is ready and eager to adopt AI, making it clear that AI is a tool to help everyone work better, not a threat to their jobs. There are several statistics out there showing that the more educated people are about AI, the more open they are to using it in both their personal and professional lives.

People that don’t understand something naturally fear it. And because of how fast ChatGPT has grown, and therefore everyone’s interest in the whole field of AI, you’re getting a lot of resistance from a lot of people. So experimentation is great, but it has to evolve into practical applications. And for that, the whole team’s mindset has to be in sync with the AI-driven direction that the company is heading.

Do you think generative AI is more important than other branches of AI or is this a “drink the Kool-Aid” moment due to the hype around Gen AI and ChatGPT?

PHIL:

Yeah, and I like what you said earlier as well about when you’re in this experimental phase, don’t be afraid of failing. You’ll be more powerful with your experimentation if you take a few risks with the knowledge that failure is one of the possibilities. Okay, so 70% of companies state that generative AI is more important than other branches of AI, and 11% say it’s much more important. Is this consistent with what you see in the real world, or is this just a huge “drink the Kool-Aid” moment?

JEFF:

So from my perspective, no, generative AI isn’t more important than other branches of AI. Rather, it’s one piece of a much larger puzzle. If we consider machine learning as the foundation of AI, it’s like the engine in a car. It’s been powering the AI vehicle for years, mostly out of sight and silently working under the hood. Most companies are already riding in this car without realizing the engine’s complexity because it’s so well integrated into their systems.

In fact, I wish I had the stat in front of me, but I just saw a statistic today from Statista showing that machine learning is a whopping 65%, or something like that, of all AI use. Most people don't realize that machine learning still accounts for the majority of AI use.

Now, generative AI, on the other hand, is kind of like the car's flashy dashboard and infotainment system. It's what people see and interact with. It's fun and engaging, allowing the average person to play with features, enjoy the ride, and even customize their experience. But without that engine – the underlying machine learning algorithms crunching the numbers, making predictions, and analyzing data – the dashboard just wouldn't have much to display.

So in business operations, even the most impressive generative AI applications are typically powered by the heavy lifting done by machine learning algorithms behind the scenes. As a good example, if you create a ChatGPT-like interface and ask a question like, "What's the demand forecast for our supply chain?", all the generative AI is doing is pulling that information and typing it out in layman's terms for you to read. The actual prediction is done by machine learning.

PHIL:

Yeah, we actually saw something quite analogous to this with the evolution of speech technologies. There are typically three major components: the recognition piece, the acoustic piece; the language modeling, which does the language understanding; and then the text-to-speech, which is the generation of actual responses. That last one was the piece people saw, and it was the piece people judged the entire technology by, whereas the recognition and understanding pieces were where the real heavy lifting was being done. People's reaction was to the interface they could see – or in this case, hear. So it's not exactly without parallels.

What do you think are the top use cases of generative AI? Does it match what companies report in LXT’s Path to AI Maturity?

PHIL:

OK, respondents to the report indicated that the three top uses of generative AI are creating documentation, 38%, improving decision-making, 36%, and marketing, 35%. How does this align with your view, and what do you think is missing from this short list?

JEFF:

So this is probably the question that most people want to know the answer to: where are people using generative AI, and what value are they getting out of it? Now, I've read dozens of reports answering this question, and they all basically have different answers. Why? Because it all depends on who answers the question and on the answer options the survey gives them. Generative AI can be used by everyone – remember, from the frontline worker to the CEO, in every function and every industry. That's a wide range of people.

So depending on who you ask, you will get different answers. And as I was saying, the answers are usually pre-given by the survey. Very few surveys have free-form answers; they almost always have you select from a list. And those lists are rarely the same.

In the case of what you just said, creating documentation is an activity, decision-making is an activity, and marketing is a department. So you could argue they're not exactly mutually exclusive. The irony here, though, is that generative AI, if used properly, would actually be able to handle survey responses that are free-form: using natural language processing and machine learning, it could easily cluster them into answer groups, along the lines of the sketch below.
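As a rough illustration of the clustering approach Jeff describes, here is a minimal sketch assuming the sentence-transformers and scikit-learn Python libraries; the sample responses, model name, and cluster count are illustrative assumptions, not details from the interview or the survey.

```python
# Cluster free-form survey answers by embedding them and grouping the vectors.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "We draft our product documentation with it",
    "It summarizes data to support our decisions",
    "Mostly marketing copy and campaign ideas",
    "Helps engineers write release notes faster",
    # ...hundreds more free-form answers
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # embed each answer as a vector
embeddings = model.encode(responses)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)  # group semantically similar answers

for text, label in zip(responses, labels):
    print(label, text)
```

In practice you would pick the number of clusters from the data and have a human name each group, which keeps the free-form signal without forcing respondents into a fixed list.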

And then the third thing I would say here is that how companies use generative AI and how individuals use it are very different, and that distinction needs to be called out. For example, I use ChatGPT every day, I use Microsoft Copilot every day, which uses generative AI, and I use my company's Hitachi Solutions Enterprise Chat every day. Those are all at a personal level – all different uses, all generative AI. But then there are company-level uses: for example, a marketing department using it, like in your list, or a company creating an internal ChatGPT to assist the supply chain, or an external chatbot for customer service. Very different. And it can be viewed from two different perspectives, like I said: the company creating it or the person using it, even if it's the same use case or application.

PHIL:

Yeah, when I looked at the responses to that particular question, I picked out the three that scored highest, but what I actually found was that the range between the highest-scoring and lowest-scoring options was not particularly wide. In effect, all of the options presented to respondents got some reasonable level of uptake, and that suggested to me that maybe people just don't know yet. As you said, it depends on who you ask, and probably what they're thinking about on that particular day. It's very context-driven. But I did get the impression that it suggested, A, that people aren't sure, and B, that it could be applied to everything – with "could" being the operative part of that.

Data from the report shows what companies are investing in. Where do you think companies should be focusing investment when it comes to rolling out AI?

PHIL:

And the last question on the report here: our data shows that when it comes to budgets for AI, companies are investing most in strategy development and training data, followed by controls, compliance, and hardware. Where do you think companies should be focusing investment when it comes to rolling out AI?

JEFF:

So I would say companies should definitely have a clear data strategy before diving into AI. And that kind of partially relates to some of the things you talked about in there.

This means investing in the process of cleaning and organizing their data, because no matter how advanced the AI is, it can't work its magic with messy data. So before dreaming of a successful AI-driven application or use case, it's critical to roll up your sleeves and scrub the data until it sparkles. Ensure that data is complete, accurate, normalized, and contextualized – the real deal. Clean, prepared data is the best thing for AI; a basic cleaning pass might look like the sketch below.
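As a minimal illustration of that scrubbing step, here is a pandas sketch; the file, column names, and cleaning rules are hypothetical assumptions, not anything Jeff specifies.

```python
# Basic cleaning pass: deduplicate, enforce completeness, bound and normalize values.
import pandas as pd

df = pd.read_csv("production_log.csv")                 # hypothetical raw export

df = df.drop_duplicates()                              # remove repeated rows
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df = df.dropna(subset=["timestamp", "line_id"])        # completeness
df["yield_pct"] = df["yield_pct"].clip(0, 100)         # accuracy: bound the range
df["line_id"] = df["line_id"].str.strip().str.upper()  # normalization

df.to_parquet("production_clean.parquet")              # ready for model training
```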

And so it’s one of the few jokes I kind of give in the industry for AI, but the moment someone says that they have an AI strategy and I go, great, can I see your data strategy? And they don’t have one. It’s hard to say that they actually have an AI strategy. So that is one of the most important things I say to figure out is your data strategy. It should answer what data should you be collecting for what purpose. What are you going to do with it? How are you going to collect it? And then AI, how it fits in to take advantage of it so that you can drive action.

PHIL:

Right, so if you don’t have an AI data strategy, you probably don’t have an AI strategy, is that a summary?

JEFF:

If you don’t have a data strategy that goes over what data you should be collecting, for what purpose, how you’re going to collect it, and what you’re going to do with it, it’s hard to have any AI strategy.

Where could companies use AI easily, with a fairly high success rate, regardless of their size, industry, or maturity?

PHIL:

Great. My closing question. If you were running this interview, what’s the big question that I should have asked you but maybe didn’t?

JEFF:

Oh, I’d have to think about that one. I might actually have to pause and think about that one. I don’t know if I had, I didn’t have that one planned. It’s a good question. How about this? Where could companies use AI easily without even knowing anything about their company, their size, their industry, their maturity that has a fairly high success rate? That’s probably a question that I think should be asked more by companies, not just which type of AI use case has the highest ROI because some of those have the highest ROI, but are also the most expensive to implement. And it depends on your company’s data strategy. So a good example of one that I think is pretty useful for a lot of different industries, not all of them, is taking advantage of your documents. Generative AI specifically is very good at being able to, with natural language processing, is able to take those documents and do stuff with it. And that stuff is where generative AI comes in.

Let’s say you have thousands of contracts in PDF form. You can have it analyze those contracts to look for similarities or abnormalities in future contracts that come in. You could have, if you’re a public company, evaluate companies’ 10K reports that are in PDF form. If you’re a manufacturer and you have user manuals, you might have thousands of user manuals that you could just immediately load in and use those in your generative AI model.

So I would argue it doesn't even matter what industry you're in. We're doing it right now with companies in the real estate industry, where they're loading in thousands of different real estate listings to pull in that information, make sense of it, summarize it, and determine trends. There are a lot of different ways you can do this just with the PDFs you have, because PDFs have been a standard for decades now – everyone has hundreds of PDFs on their computer. Take advantage of those. That's an easy use case no matter your maturity or where you're at. A document pipeline might look something like the sketch below.
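As a sketch of that document use case, the snippet below extracts text from a PDF with pypdf and asks a general-purpose model to flag unusual clauses, using the openai Python SDK (v1-style client). The file name, model choice, and prompt are illustrative assumptions; a production system would chunk long documents rather than truncating them.

```python
# Extract text from a contract PDF and ask an LLM to summarize unusual clauses.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("contract.pdf")  # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You review contracts for anomalies."},
        {"role": "user", "content": f"Summarize any unusual clauses in:\n{text[:8000]}"},
    ],
)
print(completion.choices[0].message.content)
```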

PHIL:

That’s a great question, a great answer. And I’m really glad that I put you on the spot with that one because I think people will take some real value away from that.

JEFF:

And you did put me on the spot.

PHIL:

Jeff, great talking to you. I’ve been looking forward to this for quite some time. Jodie has sung your praises for a very long time and it’s been a real pleasure to actually meet you in person and talk to you about these contemporary issues. Thanks again for taking part. We really appreciate it.

JEFF:

Thanks for having me here, this was fun.

PHIL:

Great.

AI in the Real World: Generative AI’s role in empathy-driven Healthcare https://www.lxt.ai/blog/ai-in-the-real-world-generative-ais-role-in-empathy-driven-healthcare/ Tue, 09 Apr 2024 14:39:21 +0000

Welcome back to AI in the Real World for another look at AI applications creating real-world value for businesses and consumers.

In this iteration of the blog, I am doing a deep dive into AI usage in the healthcare industry, as I’ve seen a lot of progress over the years. I’ve gathered some studies that explore whether generative AI can be used to help doctors in their daily work, and whether AI can be more empathetic than doctors. We have known for a while now that AI-driven analysis can reduce cost, reduce errors, and enhance patient outcomes in medical imaging use cases, but here the focus is on the quality of doctor-patient communication.

I’d be the first to acknowledge that when it comes to bedside manner, not all doctors are created equal – compare, for example, the affected irritability of Bones McCoy, the (potentially alcohol-driven) geniality of Hawkeye Pierce, and the manifest narcissism of the prototypical “mad scientist”, Victor Frankenstein. But putting that variability aside for a moment, let’s ask the question: “Can AI be more empathetic than doctors?” Without wanting to make this sound like a “clickbait” article, the answer might shock you!

AI assistant vs. Physician responses to patient questions

A recent study that measured the empathy of physicians’ responses showed that replies to patients’ messages generated by ChatGPT were preferred over those written by qualified physicians.

I imagine that this raises as many questions for you as it did for me. Who conducted the study? Who preferred these responses? How often did they prefer these responses? Well, it was published by JAMA Internal Medicine (Journal of the American Medical Association), and it doesn’t get much more prestigious or respected than that. The study presented patients’ questions and randomized responses to a team of licensed healthcare professionals who evaluated the quality of the empathy or bedside manner provided. They found an almost 10 times higher prevalence of empathetic or very empathetic responses from the chatbot.

At this point, armed with the knowledge that generative AI has earned a reputation for bias and hallucination, you are probably thinking that this turned out to be a triumph of “style over substance”, of “form over function”. Me too. But no, it turns out that the panel of expert evaluators also rated the chatbot responses as being of significantly higher quality than the physician responses – they preferred the ChatGPT responses in 78% of cases.

Does this indicate that we should all be ditching expensive medical appointments in favor of online solutions? I think that is a resounding “no”. While the study tells us that the AI responses were consistently (with statistical significance) preferred both for accuracy and empathy, it doesn’t say anything about the quality in absolute terms of the instances where the physician responses were better. Would any of the less accurate ChatGPT responses, for example, have been life-threatening?

If my new digital doctor is getting it wrong, what I want to know is just how wrong that is. The stakes are high in this domain, and few among us would be willing to play medical roulette as a trade-off for a more empathetic Doctor-Patient experience. But this potential for errors does not render the technology useless – in their conclusion, the authors suggest that AI assistance in the generation of draft responses that physicians can edit might be a practical way in which the technology can be applied. They suggest that pending the results of careful clinical studies, this could improve the quality of responses, reduce the levels of clinician burnout and improve patient outcomes and experiences.

Empathetic, sincere, and considerate scripts for clinicians

In a Forbes article from last summer, Robert Pearl, M.D. also explored the topic of whether doctors or ChatGPT were more empathetic and the results aligned with those of the JAMA study. One of the examples shared came from a New York Times article that reported on the University of Texas in Austin’s experience with generative AI.

The Chair of Internal Medicine needed a script that clinicians could use to speak more compassionately and better engage with patients who are part of a behavioral therapy treatment for alcoholism.  At the time, no one on the team took the assignment seriously. So, the department head turned to ChatGPT for help. And the results amazed him. The app created an excellent letter that was considered “sincere, considerate, even touching.”

Following this creation, others at the university continued to use generative AI to create additional versions that were written for a fifth-grade reading level and translated into Spanish. This produced scripts in both languages that were characterized by greater clarity and appropriateness.

Clinical notes on par with those written by senior internal medicine residents

Referencing his recent study, Ashwin Nayak of Stanford University told MedPage Today that “Large language models like ChatGPT seem to be advanced enough to draft clinical notes at a level that we would want as a clinician reviewing the charts and interpreting the clinical situation. That is pretty exciting because it opens up a whole lot of doors for ways to automate some of the more menial tasks and the documentation tasks that clinicians don’t love to do.”

As was the case for the JAMA study, Nayak is not expecting generative AI to replace doctors, but he did report that ChatGPT could generate clinical notes comparable to those written by senior internal medicine residents. Although the study found minimal qualitative differences in ‘history of present illness’ (HPI) reporting between residents and ChatGPT, attending physicians could only identify whether the source was human or AI with 61% accuracy.

So, does generative AI have a promising future with healthcare professionals? Can it be more empathetic than doctors themselves?

Looking at these studies and use cases, I’d say that the deck is stacked heavily in favor of YES. We are still in the infancy of the technology, and the experiments reported here were carried out using general-purpose models – it is pretty much inevitable that once more specialized models become available the results will be even more compelling.

LXT podcast – episode 9: Meeta Dash – VP Product at Verta.ai https://www.lxt.ai/blog/lxt-podcast-episode-9-meeta-dash-vp-product-at-verta-ai/ Fri, 29 Mar 2024 15:03:14 +0000

In the ninth episode of Speaking of AI, LXT’s Phil Hall chats with Meeta Dash, VP of Product at Verta.AI. With a background in marketing, Meeta has a unique stance on balancing product management and product marketing. She shares her insights on Verta’s approach to product development, the different trends driving AI progress, and the obstacles standing in the way of AI reaching its full potential. As we close out Women’s History Month with this exciting episode, tune in to learn more about generative AI trends from the lens of a tech product development expert.

Introducing product management and marketing extraordinaire, Meeta Dash

PHIL:

Today’s guest started her career as a software engineer, but after uncovering a passion for product, she completed an MBA from UC Davis with majors in marketing, technology management, and strategy. Over the past 20 years, she’s held senior product management and product marketing roles at organizations including Infosys, Autodesk, and Cisco. In 2018, just weeks after I left Appen, she joined the company and led the product integration and rebranding effort to bring Figure 8 and Appen together. She is currently VP of product at Verta, where since October 2020, she has led product strategy for Verta’s Generative AI Workbench, setting the vision for their MLOps products. Please welcome today’s guest, Meeta Dash.

Meeta, it’s lovely to have you here.

MEETA:

Thank you, Phil. Nice to be here, and I’m looking forward to this discussion.

How do you design and deliver an end-to-end strategy for the development of products?

PHIL:

Great. So, your role, at least as I understand it, is to design and deliver an end-to-end strategy for the development of products to maximize existing customer bases and engage further customers. How do you do this?

MEETA:

Yeah, so the product role is generally very interesting, specifically in the tech sector. Sometimes you start a product innovation from scratch, which we did at Verta, where we saw a really big market pain point around operationalizing machine learning. Folks have a lot of models in development – they’ve figured out how to train a model and make it ready for production – but then what next? How do they move those models to production, run them at scale, and guarantee safety and quality? From a product standpoint, it was very interesting to see that market pain point. There was no platform or tooling that could help folks standardize that process. So just to give you the example of Verta: the way we build a product strategy is to first really understand the customer pain point, understand that there is a market gap, and then apply technology.

I’m a big believer that technology comes next. The first thing that comes is a customer pain point and a market need. And then technology, you figure out this technology is a good fit. So that’s kind of one of the approach that I generally take that works well. And then when it comes to scaling a product, you have a product and then you’re looking for growth sectors, that’s another area where, again, technology and new trends play a key role. If you think about generative AI right now, one and a half years back, generative AI was nowhere, right? Then this technology came up and then everyone saw a big opportunity and that’s where the product and management and product strategy teams need to figure out that, hey, there is a technology that can really play a big role in innovating and taking the market forward. 

How do we build the right product around it? To address that need, you have to do more user interviews, understand user needs, and also worry about the user experience, because most of these products really deal with end users. It could be a business user, or in a B2C space, a normal consumer. Building the right user experience is really critical, because tech products can be very complicated, and if they are, adoption will suffer.

So when you’re thinking about growing and increasing adoption, then in my view, thinking through user experience, not just technology, also is critical. So in a nutshell, I would say understanding market pain points, customer needs, and building the right product with the right user experience that can really help scale and make the user sticky. That’s kind of the role of a product manager.

With your background in marketing, how often are you blending your marketing expertise with the product development and creation process?

PHIL:

Yes, certainly making the product sticky resonates very strongly. On a related note, you are a VP of product with an MBA in marketing, and that makes sense. But if it’s the role of the VP of product to ensure that the products your organization creates are a match for its target audience, how often are you driving the creation of products that your target audience knows it needs, and how often are you putting on your marketing hat and selling something that you know they need, but they perhaps don’t?

MEETA:

I think with the machine learning and AI space, it’s kind of 60-40. Forty percent of the time, they know they need it. Sixty percent of the time, they’re struggling but don’t know whether there’s the right tech or product fit. So it’s a combination of both. With new technology, you have to sell the future, sell the vision, and talk about the possibilities. And for that, creating proof of concepts and showing what is feasible really helps a lot.

But if you work closely with users, you understand their problems. Then you come up with some ideas and show them, hey, this is possible. And they light up: wow, I never thought about that. So that’s been a really interesting experience I’ve gone through in the past few years.

Is the combination of a product and technology background with marketing common in product development?

PHIL:

Is that combination of a product and technology background with marketing a common one, or are you a fairly unique case?

MEETA:

I would not say I am unique; I’ve seen product managers coming from different backgrounds. I have team members who come from a completely humanities background, without any engineering. Having a tech or marketing background maybe helps, but folks also come from other streams, like pre-sales or sales, and they do a really good job.

At the end of the day, there is some intuition you have to build when you’re talking to customers and users or looking at the market. There is hard data that you look at, but there is also an intuition about the future that you develop, and that helps. And then I feel the most important thing is really user empathy. No matter whether you’re from sales, customer success, or engineering, as long as you have user empathy and you’re trying to solve the user’s problem rather than thinking, “Okay, I need to sell this product,” you’ll build a really great product.

Are you building solutions for a better-informed audience today than you were in the past?

PHIL:

Yep, that certainly makes sense to me. LXT – the company I work for – has published an AI maturity report. We’ve done it annually, and we’re just about to publish our third edition. We’re recording this ahead of time, so by the time this interview comes out, it will actually have been released.

Now, one of the key findings is that the proportion of organizations with a clear AI implementation and utilization strategy has grown steeply. A year ago, it was a minority of organizations. Now it’s a clear majority. Are you seeing a similar change? Are you building solutions for a better-informed audience today than you were in the recent past?

MEETA:

I don’t know about last year specifically, but definitely when I was at Appen and Figure 8, the landscape was very different than it is today. At that time, we were seeing a lot of folks who were just thinking about building ML models and running POCs, but now folks have a clear strategy. They’re thinking about governance and risk control. They’re thinking about how to run at scale in production. Those problems were not really thought about two or three years back. So definitely, that’s a clear signal that we are getting from the market.

The other trend that we are seeing is GenAI. It’s making AI much more accessible to companies, so that could be another factor. Folks who have not traditionally thought about machine learning are now much more aware of it.

PHIL:

Well, that’s a good segue into my next question. So our survey is focused on AI in the broader sense, but it has showed that currently the majority of companies are placing a higher priority on generative AI products than they are on other AI projects. Is this just a result of hype and what’s in the news or is it actually directly related to goals and objectives and practicalities?

MEETA:

To be honest, I think it’s a combination of both. There is definitely hype, and it challenged companies, because no matter which industry you are in – banking, insurance, healthcare – you will fall behind and lose your competitive edge if you are not in this space, right? So the generative AI hype helped.

The other thing that really helped is OpenAI and ChatGPT – consumers have access to them. Inside the companies that have lagged behind, users have access to ChatGPT and are using it. Their IT teams are waking up and thinking, hey, folks are using ChatGPT; we should either block them, or have a strategy in place so that if they are seeing value, we move this forward. So I think those trends really helped a lot.

What are your views on the availability of quality training data?

PHIL:

The survey also showed that one of the most significant bottlenecks for generative AI deployment is the availability of quality training data. It also indicated that the vast majority of respondents expect the need for training data to continue to increase. What are your views on this?

MEETA:

So you’re talking to someone who has worked in the training data platform for years. So I have seen the pain point around it. I have actually built Chatbot in the past at Cisco. And we literally that time struggled a lot with getting quality training data. And that problem was eventually solved.

So, it’s directly, like the quality of your product is directly related to the quality of your training data, the amount of training data. And then it’s not just you don’t pause when the model is in production, you continue to do that because the model encounters so many edge cases with GenAI.

These models are generative, so there is a high likelihood of hallucination and unpredictable outcomes. So you keep evaluating, and you make sure your models are really fine-tuned for your use case, because you are using a general-purpose model and that’s not going to work for you all the time. So the need for training data, if anything, will increase – and I would stress the quality of the training data even more than the quantity.

What are the biggest obstacles to AI reaching its full potential?

PHIL:

Okay, that certainly resonates with what we’re seeing in the marketplace. And obviously, for a business that works in this space, we’re always very interested in the answer to that question. So what do you see as the biggest obstacles to AI reaching its full potential?

Are they technical, regulatory, financial, or something else?

MEETA:

I’m less worried about financial because their technology can solve. Like I know there are folks who are thinking about cost and large models. How do you run them? So that technologically we can challenge, we can solve those challenges. I think what worries me, what I have seen, like we have been in a lot of executive round table discussions. And the trend that I’m seeing is folks are trying either AI or generative AI proof of value projects inside their organization. They’re seeing benefits. But where they’re stuck at this point is they do not have the right governance or right framework to ensure that they can follow safety practices and ensure quality and compliance in the right way so that these things can actually be productionized.

So I think that’s where a lot of companies are kind of dragging their feet, right? In some cases, they’re fine using these for their internal tools, internal purposes, but for external use or for high-risk scenarios, I believe that is the current bottleneck. So I think governance and those controls and quality is a big challenge. Regulatory, yes, we have so many regulatory rules that’s coming up. So that will pose a threat for sure. I think it’s a good thing. I think it’s not a threat, it’s a good thing that it will just keep us on our toes, to make sure that we are following the best practices.

How environmentally sustainable do you think the continued growth in the large language models domain will be?

PHIL:

Great. Another potential blocker that I’m seeing, I’m not seeing it addressed with high frequency, but I’m but I am seeing it come up. So forgive me if this is a fairly long setup to the question here, but it’s the environmental cost. So a recent MIT report quantified this in fairly frightening terms. They said the cloud now has a larger carbon footprint than the entire airline industry. A single data center might consume an amount of electricity equivalent to 50,000 homes. And training one AI model can emit more than 300 tons of carbon dioxide equivalent, which is about five times the lifetime emissions of an average American car.

Now, I haven’t fact checked the precision of these numbers, but it’s MIT, so I’m expecting that this is reasonably well researched. And in any case, directionally, it is well documented that contemporary data centers in particular, in general rather, and LLM generation in particular, require massive compute power, and they have a correspondingly large carbon footprint. How environmentally sustainable is continued growth in this domain? Do we reach a point where we simply can’t capitalize on the power of the AI, the large language models, without actually paying a huge environmental debt?

MEETA:

I think this is a big problem right now; I acknowledge that, and I have also read a bunch of articles around it. One thing our company is investing in right now, which is very relevant in this space, is this: large language models are great, but there is an increasing trend towards asking whether we can achieve similar quality with smaller models. Can we take large language models and distill them down into task-specific or use-case-specific models? That’s something I was reading about from Microsoft; they are also exploring those fields.

So I think there will be a trend where folks start thinking about smaller models that achieve similar generative capabilities. And we all need to acknowledge this: there may be big companies who can afford to run those large data centers, but is that really the path forward for us, or do we need to be more innovative and solve this problem? We are actually solving that with model distillation, and we are seeing increasing interest. In the next two years, I see there being a focus on cost control and on being more environmentally conscious about these models.
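To make the distillation idea Meeta mentions concrete, here is a minimal PyTorch sketch of one Hinton-style knowledge-distillation training step; the temperature, mixing weight, and model objects are illustrative assumptions, not details of Verta’s approach.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, T=2.0, alpha=0.5):
    """Blend a hard-label loss with a soft loss that matches the teacher."""
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)  # large frozen model
    student_logits = student(inputs)      # small model being trained

    # Soft targets: match the teacher's temperature-scaled distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard targets: standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The smaller student keeps much of the teacher’s behavior at a fraction of the inference cost, which is where the environmental and financial savings come from.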

PHIL:

Yes, I suppose. And if you think about it, linking environmental concerns to cost control makes sense: it might be hard to persuade people to act on environmental concerns alone, but if those concerns are addressed by addressing cost, it’s not hard at all to convince people.

MEETA:

That’s true, that’s true.

Who are some of the women that have inspired you throughout the years?

PHIL:

Yeah, I’m a firm believer that alignment of interests can achieve all kinds of things. You are doing good for the environment, but at the same time you are financially much better off.

So March is Women’s History Month. And I think we were we’re planning on publishing this interview on the last day of March as a closing point for Women’s History Month. Who are some of the women that have been particularly inspiring to you?

MEETA:

I’m inspired by a lot of women leaders and philanthropists. Like I follow them over LinkedIn and Twitter, like folks like Melinda Gates and Michelle Obama, being a few of them.

But in my view, I’m really fortunate to work with really good women leaders, peers, and team members. These women inspire me a lot more because I work with them day to day. I see their struggles, I see their challenges, and I see how they overcome those challenges and pave their own ways.

Those are the more real-life women I interact with day to day, and even if the wins are very small, I see them happening and I get inspired. To give an example: I had a team member who was new to product management and was struggling with public speaking, and I saw how she challenged herself, overcame it, and became a real star player on the team. I’ve also seen folks who have struggled in their personal lives and managed both their careers and their personal lives really well. So I get more inspired by the women I interact with day to day.

As a technology leader, what does the representation of women in technology look like to you?

PHIL:

Yeah, fair enough. I work with a pretty inspiring team of women myself. So I completely get that. Now, I have a generally optimistic view about social evolution. My optimism is shaken every now and then, but I take a generally optimistic view. When I spoke to Pradnya Desh earlier this month, I asked her how she felt about the representation of women in technology and whether she thought that proactive affirmative action was needed. She responded by pointing out that only 2.2% of venture capital in the US goes to companies founded by women.

Does this resonate at all with your own experience as a technology leader?

MEETA:

It does. Fortunately, I’m working in a female-led company – our founder and CEO is a woman – and in the past I’ve had managers who were women leaders. But across the board, I’ve been in meetings where there are 20 men and I’m the only woman in the room. I’ve seen that happen a lot in my career. In tech and in AI generally, over the last few years I’m seeing more women coming into leadership and other roles, but the growth rate is still much slower than it should be. And in the VC and startup space, it’s even weaker.

PHIL:

Yes, I remember being particularly inspired some years ago when Justin Trudeau became Prime Minister of Canada. He was asked by a journalist: one of the priorities for you was to have a cabinet that was gender-balanced – why was that so important to you? And his answer was, “Because it’s 2015,” which I thought was a great and inspiring answer. But it was 2015, and it doesn’t feel like we’ve made as much progress as we should have. So I tend to agree with you: things are moving in the right direction, but it would be great if they moved a bit faster.

How do we make sure that we are working towards a world where the intelligent systems that we are developing are working in favor of humanity and not against it?

PHIL:

So my last question, what is the big question that I should have been asking you, but perhaps I haven’t? And what is the answer to that?

MEETA:

That’s an interesting question. We have covered a lot of grounds. There is one trend, like a lot of talks happening not just in tech space but in general in the consumer world around AGI’s, right. The automated general intelligence, all the things that happened with OpenAI in you know last year that thing became very prominent right where we are thinking, folks are thinking, the machines will be smarter than humans. These intelligent agents will create more agents and this kind of, there can be a scenario for a sci-fi movie that can happen to us.

So I can talk about that, but I would also like to hear your opinion because you have been in the training data space. You are more close to the AI world as well. You’re seeing what’s happening in the field. So you can also relate to that. I think it’s a very interesting conversation and we should all acknowledge that there is a possibility not right now but in future, how do we make sure that we kind of work towards a world where… the intelligent systems that we are developing really works in favor of humanity, not against it, right?

So all the things we’re discussing about governance, compliance, and the regulatory rules we should follow – those things need to be discussed more, and not just from the angle of “Is it good for my business? Is it going to bring me financial benefit?” but also from the angle of whether this is something that can help us in the future as a human race, doing greater good for humanity. I don’t know if you have an opinion on it.

PHIL:

I always have an opinion. Well, during the course of doing this interview series, related questions have come up fairly frequently, and I have generally been somewhat surprised that the people I’ve spoken to have tended to downplay that and say, look, it’s just not really a concern. They’ve given a very optimistic view. And these are people that I have a very high level of respect for – I choose my guests carefully.

So yeah, I have harbored some sort of robotic doomsday concerns. I read Isaac Asimov as a kid, and his books always had this theme running through them of having guardrails to keep robots in service of humanity, and of finding interesting ways for those guardrails to break down. The experience of reading those things is still with me. I can’t help but look at this and think Isaac Asimov might have been right on the money writing those things, probably in the late 1940s or 1950s. The scenarios he imagined were remarkably close to where we find ourselves today, or where we might find ourselves in the near future.

So yeah, I probably harbor greater levels of concern than most of the people I’ve spoken to. I see scenarios where technology is always going to take instructions literally. And when you take things literally, all kinds of bad things can happen. People write good laws with good intentions, and then you find ludicrous instances where judges are forced to treat the law as it’s written – and as it’s written, nobody foresaw the situation, it’s being literally interpreted, and the outcome is not what the lawmakers had in mind.

I won’t dive into US politics and things related to that, but yeah, I’m sorry, long answer, but yes, I think that, I think it would be naive of us to not keep this somewhat front of mind. I certainly don’t want to stop progress. I love all the upside of what we’re seeing. And I think that the upside can continue to dominate. But, yes, we should be cautious. And I’m a risk taker by nature, by the way. So it’s something else for me to say. We should be cautious.

MEETA:

That’s perfect. The outcome may not be as dramatic as being pointed out, but there is a risk, that’s what we need to acknowledge.

PHIL:

Well, Meeta, thank you so much for joining us today. It’s been great to speak with you. I’m sorry that we actually missed the opportunity to work together six or seven years ago when we nearly overlapped. And I hope we’ll interact together in the future. But thanks again for doing the interview.

MEETA:

Me too, me too. I really enjoyed speaking with you and I’m glad that you have invited me.

Highlights from our executive survey: The Path to AI Maturity 2024 https://www.lxt.ai/blog/highlights-from-our-executive-survey-the-path-to-ai-maturity-2024/ Thu, 14 Mar 2024 13:07:35 +0000

We are thrilled to share the results of the third annual Path to AI Maturity report, featuring insights based on our survey of over 300 enterprise organizations in the US. This year we included a special section on Generative AI given the frenzy in the industry over this technology. Our research was conducted between late 2023 and early 2024, reflecting over a year of activity since the launch of ChatGPT.

Here are some of the key takeaways from this year’s research:

72% of organizations have reached the higher levels of AI maturity.

Each year as part of the survey, we ask participants to place their companies on the Gartner AI Maturity Model. According to this year’s results, 72% of organizations claim to have reached the higher levels of maturity, from Operational (AI in production, creating value) to Transformational (AI is part of business DNA) status.

The most significant shift is in the mid-range, where nearly a third of companies state that AI is now in production and creating value. To achieve this, half of all organizations invest between $1 million and $50 million, and more than ten percent reported an AI budget between $50 million and $500 million. 

Nearly seventy percent of companies say that Generative AI is more important than other AI initiatives.

According to 69% of respondents, generative AI is more important to their organizations than other AI initiatives, and 11% say it is much more important. The three top uses for generative AI include creating documentation (38%), improving decision-making (36%) and marketing (35%). Only 1% of all organizations do not use generative AI solutions.

Organizations value data quality over quantity for time-to-market success.

The majority of respondents stated that their needs for training data will increase in the next two to five years. And nearly twice as many stated that data quality (62%) is more important than data volume (38%) for AI project success, a finding in line with today’s data-centric AI approaches. Top categories for the ROI of high-quality training data include time-to-market acceleration (32%), higher success rates of AI programs (32%), and increased customer satisfaction (31%).

Learn more in the full report

In our report with the full findings, you can learn more about how organizations are achieving AI maturity, including insights by industry.

Download the full findings here.

LXT Unveils The Path to AI Maturity 2024 Executive Survey https://www.lxt.ai/blog/lxt-unveils-the-path-to-ai-maturity-2024-executive-survey/ Thu, 14 Mar 2024 13:06:24 +0000

This year’s report highlights a major shift from experimentation and pilot tests to AI in production, along with the importance of generative AI to the majority of organizations

TORONTOMarch 14, 2024LXT, an emerging leader in global AI training data, today released its third annual executive survey, The Path to AI Maturity. This latest report found that organizations have taken large steps, with nearly three-quarters (72%) reporting that they have reached higher levels of AI maturity. The most significant shift is in the mid-range, where nearly a third of companies state that AI is now in production and creating value. To achieve this, half of all organizations invest between $1 million and $50 million, and more than ten percent reported an AI budget between $50 million and $500 million. 

“2023 was the year of AI and became an imperative for companies of all sizes,” said Mohammad Omar, LXT Founder and CEO. “Now more than ever, organizations realize that AI is table stakes to remain competitive, and they are taking the necessary steps to grow in AI maturity. These include the prioritization of powerful technologies such as generative AI, and the use of high-quality training data.”

LXT, in partnership with research firm Censuswide, commissioned a survey in late 2023 of 322 senior decision-makers (more than half from the C-suite) with verified relevant AI experience at US firms with annual revenue of over $100 million and a company size of more than 500 employees.

A boost in enterprise AI maturity – From experimentation to AI in production and creating value

In the survey, executives were asked to place their companies on the Gartner AI Maturity Model, and nearly three-quarters (72%) report that they have reached one of the higher levels of maturity (compared to 48% in the 2023 report and 40% in 2022). These stages range from Operational, where AI is in production and creating value, to Transformational, where AI is part of the business DNA.

But the greatest shift in AI maturity across all three years is represented by the move from Active to Operational, with the largest jump in this year’s survey. Nearly one-third (32%) report that their organizations have reached the Operational stage, the first of three mature stages. This stage is now the most common AI maturity level, whereas in past years it was the Active stage where companies are undergoing initial experimentation and pilot projects with AI.

To fund these moves, half of all organizations invest between $1 million and $50 million, and more than ten percent (13%) reported an AI budget between $50 million and $500 million.

Generative AI – Nearly seventy percent state it is more important than other AI initiatives

According to 69% of respondents, generative AI is more important to their organizations than other AI initiatives, and 11% say it is much more important. The three top uses for generative AI include creating documentation (38%), improving decision-making (36%) and marketing (35%). Only 1% of all organizations do not use generative AI solutions.

Organizations are sourcing the data to train their generative AI solutions by using internal data (36%), collecting data externally through a third party (35%), collecting data externally themselves (35%), and by using publicly available data sets (32%). The biggest bottlenecks include security and privacy concerns (39%), accuracy of the output (38%), and the availability of quality training data (36%).

Despite the value placed on generative AI applications, only 12% of companies have deployed them so far, and only 5% say that they deliver the greatest return on investment (ROI).

Data-centric AI – Organizations value quality over quantity for time-to-market success

The majority of respondents stated that their needs for training data will increase in the next two to five years. And nearly twice as many stated that data quality (62%) is more important than data volume (38%) for AI project success, a finding in line with today’s data-centric AI approaches. 

Top categories for the ROI of high-quality training data include time-to-market acceleration (32%), higher success rates of AI programs (32%), and increased customer satisfaction (31%).

To learn more and review the complete findings, download The Path to AI Maturity report.

About LXT

LXT is an emerging leader in AI training data to power intelligent technology for global organizations. In partnership with an international network of contributors, LXT collects and annotates data across multiple modalities with the speed, scale and agility required by the enterprise. Our global expertise spans more than 145 countries and over 1000 language locales. Founded in 2010, LXT is headquartered in Toronto, Canada with a presence in the United States, UK, Egypt, Turkey and Australia. The company serves customers in North America, Europe, Asia Pacific and the Middle East. Learn more at lxt.ai.

Contact

info@lxt.ai
