While AI is booming, only 17% of companies have successfully implemented it. Discover the challenges and opportunities revealed in an exclusive Gatepoint Research survey of 100 C-suite executives.
Artificial intelligence is poised to shape the future of nearly every industry. By 2030, AI is expected to contribute more than $15 trillion to the global economy (Bloomberg 2024). However, the success of AI depends on data, specifically data that meets standards of quality, security, compliance and sovereignty.
While most organizations have an AI roadmap in place and have made AI investments, many are still only in the exploration or project development stage. Only 17% have moved their AI projects into production. They appear to struggle with understanding AI’s complexity, specifically how the currently used solutions handle data privacy, security, compliance, sovereignty and quality. Since data is central to AI strategy, data-related challenges can be the main barrier to AI success.
There is also concern that customers’ perception of an organization’s AI use, currently positive or neutral, could shift to negative after even one bad experience, which is likely if risks to data security, compliance and sovereignty are not managed. The neutral perception reported by 42% of respondents may reflect customers not yet understanding AI or how it is used to deliver real-life experiences. Organizations, in turn, may not fully understand how their customers’ perception of AI in product and service delivery affects their reputation.
To get deep insights into where major organizations are on their AI journey, what challenges they are experiencing in executing their AI strategy and what business outcomes they expect from their AI investments, Gatepoint Research surveyed 100 C-suite executives. Several industries were represented, including software, manufacturing, business services, finance and insurance. This white paper delves into their feedback, along with our analysis and a look at the potential future of AI and data.
Over 90% of executives have started down the AI path, with 17% already in production. The respondents’ organizations, however, fall into two distinct and roughly evenly divided groups: one we call the trailblazers and the other we call the laggards. The 30% who are “actively developing solutions” and the 17% who have “GenAI use cases in production” and are “optimizing production deployments” are the trailblazers. The 22% who are “experimenting with technology” and the 22% who are “identifying possible use cases” are the laggards.
The fact that only 17% of organizations have production-ready AI use cases is telling and reflects the immaturity of the market right now. Many organizations are still developing their AI strategy, considering the best use cases and exploring the potential business outcomes. Given the rapid innovation in the space, they may be waiting for it to stabilize before making a major investment, or simply waiting for others to blaze the trail and take on the risks. However, much like the move to the cloud years ago, organizations know they need to adopt AI to be innovative, deliver stand-out customer experiences, remain competitive and drive revenue. As survey responses indicate, their hesitancy is partly due to trying to understand and work with AI’s complexity and its risks, such as issues with data security and privacy, data quality and data sovereignty.
Organizations need to organize and leverage their data properly to adopt generative AI securely. They need to profile the risks and address privacy and data sovereignty concerns, which may partly explain why the laggards are moving slowly on AI: they see the risk in using private or proprietary data to fuel their GenAI initiatives. By deeply assessing how their data is prepared and used for GenAI, they can develop and deliver new products that meet their customers’ expectations without creating data breaches or other data security mishaps.
Since the GenAI field is still maturing, organizations in the experimentation stage adopt off-the-shelf models to start prototyping quickly and speed up their testing. The expectation, however, is that they will move to open source models once they understand the outcomes of the prototype and begin planning production deployments: off-the-shelf models become too costly at scale, and open source solutions enable end-to-end control without being tied to a particular provider.
The large language model (LLM) most commonly used (or planned for use) in the production phase of AI projects is OpenAI’s GPT (58% of survey respondents). Other off-the-shelf models are also used, including AWS/Bedrock (31%), Google Cloud/Gemini (25%), and Anthropic/Claude (15%).
Right now, the focus is on accelerating innovation rather than on whether to use proprietary or open source LLMs. However, regardless of which tool or model an organization uses for AI, questions still need to be answered about where private data resides and whether compliance and sovereignty requirements are met.
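To make that data question concrete, the minimal sketch below shows a call to one of the hosted models cited above using OpenAI’s Python client. The model name and prompt are placeholders, and an API key is assumed to be configured; the point is simply that whatever is placed in the prompt, including any private or proprietary context, is transmitted to the provider’s service.

```python
# Minimal sketch: calling a hosted LLM via the OpenAI Python client (v1+).
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Everything in `messages` -- including any private or proprietary
# context -- is sent to the provider's servers.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a support assistant."},
        {"role": "user", "content": "Summarize the attached customer record."},
    ],
)
print(response.choices[0].message.content)
```

The same question applies to any of the providers listed: unless the model is self-hosted, the prompt and any data embedded in it leave the organization’s boundary.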
Nearly all executives believe their customers are generally positive (47%), extremely positive (8%) or at least neutral (42%) about their use of GenAI. The nearly 50% positive perception is a solid driver for innovation, with the potential to lower costs while enhancing customer experiences. The neutral responses suggest that many executives believe customers don’t completely understand GenAI and are waiting to see whether it is positive or negative, perhaps because they haven’t seen anything real yet. They may not realize that GenAI is already used in the products and services delivered to them.
It’s encouraging that only 3% believe their customers’ perception is negative. However, a lack of awareness or understanding of GenAI and its complexity means customers could move from neutral or even positive to negative, especially if data quality, privacy or security concerns arise or a data breach occurs. There is a reputation risk when a GenAI project goes into production without proper data privacy and security handling: if private data is not managed well, it could be exposed to third parties or leaked.
Organizations need to learn from the mistakes made during past technological transformations and consider the risk of something going wrong. To minimize risks, they must ensure data security, compliance and sovereignty.
Almost three-quarters of executives consider private and proprietary data critical to the success of their AI strategy. Interestingly, that means 26% believe it is only moderately important or not important at all. Again, this points to the need to understand the complexity of AI and the risks of exposing proprietary data. There may be little concern if AI is only used to generate blog posts, but for more advanced use cases, proprietary data is needed to train, fine-tune and add context to models in order for them to work properly and accurately.
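The “add context” point is often implemented as retrieval-augmented generation: proprietary records are retrieved and placed directly into the prompt, which is exactly why data quality, privacy and sovereignty matter. The sketch below is a hypothetical illustration of that pattern; `search_internal_docs` and `call_llm` are placeholders, not any specific product’s API.

```python
# Illustrative sketch of retrieval-augmented generation (RAG):
# proprietary records become part of the prompt sent to the model.
# `search_internal_docs` and `call_llm` are hypothetical placeholders.

def search_internal_docs(query: str, top_k: int = 3) -> list[str]:
    """Placeholder for a search over private/proprietary data
    (e.g. a vector index or full-text search)."""
    return ["<internal document snippet>"] * top_k

def call_llm(prompt: str) -> str:
    """Placeholder for whichever hosted or self-hosted model is used."""
    return "<model answer>"

def answer_with_context(question: str) -> str:
    snippets = search_internal_docs(question)
    prompt = (
        "Answer using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(snippets) +
        f"\n\nQuestion: {question}"
    )
    # The proprietary snippets travel with the prompt, so where and how
    # the model is hosted determines where that data ends up.
    return call_llm(prompt)

print(answer_with_context("What is our refund policy?"))
```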
Executives overwhelmingly (80%) say data sovereignty and control is as critical to strategic success as the use of private and proprietary data.
Since organizations are making huge investments in AI, they need to be in complete control of the solution, which means favoring cloud-agnostic approaches that keep data under their control. The more control they gain by being cloud-agnostic (control of their data, its flow, its transformation and the data models), the better they can produce cost-effective, future-proofed solutions.
Companies in regulated industries, in particular, should be making significant investments in data sovereignty solutions. Having that data control and sovereignty in place mitigates the risks of data exposure. By minimizing risk, companies can best sway customers’ neutral perceptions in the positive direction.
Most organizations are experiencing challenges in executing their AI roadmap, the top two being data security and privacy (58%) and data availability and quality (53%). This isn’t surprising given that data is a critical component of AI strategy. These results clearly confirm that organizations struggle with data-related readiness for AI and still have a lot of work to do before moving to production.
Organizations may be laggards because of these challenges, but also because of other blockers cited, such as a lack of strategy (12%), board-level investment (14%) and speed to deployment (12%). The latter is interesting given that many organizations seem to be on the fence about AI despite the rapid innovation in the space.
Since 75% of organizations prioritize the use of private and proprietary data, many may be stuck in laggard mode because they lack data availability or quality and have concerns about data security and privacy.
More than 25% of executives are primarily concerned about data breaches, while 17% are worried about not complying with regulatory standards and 16% fret about data sovereignty. This speaks to the importance of properly modeling data before adding it to any AI service.
Again, there is the potential for a negative impact on customer sentiment if data risks are not managed and there is a breach or non-compliance. The customer AI perception needle could easily move from positive or neutral to negative and company reputation could be destroyed.
Most executives rate open source and transparency as moderately important, with more than a third leaning toward very important. The fact that open source and transparency are not yet sole deciding factors explains the high adoption of off-the-shelf models like OpenAI’s GPT and AWS Bedrock. However, there is clearly a critical focus among business leaders on mitigating risks and ensuring security while achieving successful production deployment.
With open source tools, organizations can reduce vendor lock-in and enhance control over both data and the models used. Executives do recognize that open source is the likely key to building future-proof solutions, which would explain the tendency to start leveraging more open source models.
The GenAI provider landscape is constantly changing, and we don’t know where providers will be in a year or even which will be the best. So, while organizations are fine with using off-the-shelf models as they experiment with GenAI tools, they ultimately want the ability to swap one provider for another, or to avoid relying on a specific infrastructure, and open source enables that.
More importantly, open source tools can give organizations better control over the end-to-end data journey and the assurance that they know exactly how and where their data is being used.
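One way to read “swap one provider for another” in practice is a thin abstraction layer that keeps application code independent of any single model API. The sketch below is a hypothetical illustration of that idea under those assumptions, not any vendor’s SDK; the backend classes are placeholders.

```python
# Hypothetical sketch of a provider-agnostic abstraction layer.
# Each backend implements the same interface, so swapping providers
# (hosted or self-hosted open source) becomes a configuration change.
from typing import Protocol

class ChatBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedBackend:
    """Placeholder wrapper around a commercial API (e.g. GPT or Claude)."""
    def complete(self, prompt: str) -> str:
        return "<response from hosted provider>"

class SelfHostedBackend:
    """Placeholder wrapper around an open source model run in-house,
    keeping the full data journey inside the organization."""
    def complete(self, prompt: str) -> str:
        return "<response from self-hosted open source model>"

def build_backend(name: str) -> ChatBackend:
    # Selecting the backend by configuration avoids hard-coding a provider.
    return SelfHostedBackend() if name == "self-hosted" else HostedBackend()

backend = build_backend("self-hosted")
print(backend.complete("Draft a product description."))
```

With this kind of seam in place, the application code above the interface does not change when the underlying model or provider does.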
The number one business outcome executives want from their investments in data and AI over the next 3-5 years is new revenue (26%), which primarily comes from innovation. Delivering innovative new products (15%) and improving the existing customer experience (15%) are rated just as important as cost reduction (16%) and competitiveness (15%).
This feedback shows organizations have a clear focus on growth and innovation and making the customer experience a priority versus cost-cutting. It points away from the theory that AI displaces jobs and toward AI’s ability to create value. It also reveals a customer-centric mindset, as organizations view AI as a way to deliver new products that enhance their customers’ experience.
This mindset reflects a focus on competitiveness through value delivery.
AI projects are presumably getting funded because organizations need to develop new capabilities to differentiate themselves in the market, not just to say, “We can save 30%.” While cost savings are still important (37% of the survey respondents were CFOs), innovation is a top priority.
Even though businesses can see the outcomes that AI and data investments can potentially drive, there still appears to be hesitancy to be an AI and data trailblazer. Weighing risk against reward can become a sticking point and eventually leave them late to market with their innovation.
People are looking for amazing experiences, and organizations are seeking to deliver them through software and AI. AI technology will be embedded in the processes and products of at least 90% of new enterprise apps by 2025 (IDC).
Data, and the applications it powers, are at the heart of AI. According to McKinsey, companies that are data-driven outperform their competitors by up to 20% (McKinsey 2023). However, organizations are struggling to unlock value at a time when they face doing more with less, skills shortages, cloud complexity and legacy technologies that get in the way of innovation, experimentation and differentiation. Less than a third (29%) of organizations are able to evaluate data fast enough to stay on top of their game (Gartner).
Organizations need the ability to harness the power of their data with choice but without complexity. Aiven delivers this through one unified data and AI platform for organizations to stream, store and serve all their data on the clouds of their choice.
The Aiven Platform unlocks value at the touch of a button, providing seamless data mobility, robust security and compliance, and optimized access to AI and ML services. It puts data in good shape for the business and the business in good shape for innovation.
Organizations invest in AI to achieve several different business outcomes. While cost reduction is always important, the biggest driver is generating new revenue, which generally comes from innovation. Providing value with revolutionary products and an optimized customer experience are primary goals.
With the introduction of GenAI, many organizations assumed that because it was easy to use, it would be simple to manage. That is not the case. Organizations need to secure their data and the way they pull it from their data sources, unify those sources, and aggregate data so that personally identifiable information (PII) is removed. However, this data journey typically spans several technologies.
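As a rough illustration of that aggregation step, the sketch below redacts a couple of common PII patterns before records are handed to any GenAI service. It is a minimal example only; a production pipeline would rely on dedicated PII-detection tooling rather than simple regular expressions, and would cover far more than emails and phone numbers.

```python
# Illustrative sketch only: strip obvious PII (emails, phone numbers)
# from records before they are aggregated and passed to a GenAI service.
# Real pipelines typically use dedicated PII-detection tools,
# which also handle names, addresses and other identifiers.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Contact Jane Doe at jane.doe@example.com or +1 555 010 1234."
print(redact_pii(record))
# -> "Contact Jane Doe at [EMAIL] or [PHONE]."
```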
With Aiven, organizations have one solution provider that can satisfy all their data journey needs across those technologies. They have one security model, one provider and one partner for demonstrating compliance and quickly addressing any problem that might appear in an individual technology or in the integrations between them.
As a player in the GenAI market, Aiven has incorporated GenAI into its platform. For Aiven, it’s not about paring down expenses and the workforce; its focus is driving revenue by achieving better outcomes for the customer and figuring out the best way to do that.