The State of AI & LLMs
Artificial Intelligence continues to be at the forefront of technological advancement, significantly influencing various industries worldwide.
As we step into 2024, the landscape of AI is rapidly evolving, driven by breakthroughs in machine learning, data science, and innovative applications. To capture the current trends, challenges, and opportunities in the AI industry, we conducted a survey targeting professionals and practitioners in the field. This report presents the findings from the survey, providing insights into the state of AI from those who are living and breathing the industry and shaping the projects of tomorrow. Through this analysis, we aim to highlight the key trends, understand the common challenges, and foresee the future direction of AI development and deployment.
The Importance of MLOps, LLMOps, and Feature Stores
MLOps and LLMOps are essential frameworks that enable organizations to efficiently develop, deploy, and manage AI/ML models. MLOps focuses on standardizing and streamlining the ML lifecycle, from data preparation and model training to deployment and monitoring. LLMOps extends these practices to large language models, addressing the unique challenges of LLM applications such as prompt management, tracing and debugging, and inference optimization.
Feature stores play a critical role in the MLOps pipeline by providing a centralized repository for storing, managing, and serving features used in machine learning models. They enable consistent and reusable feature engineering, ensuring that the same features are available during both training and inference, thereby improving model accuracy and reliability.
These frameworks and tools are crucial for organizations to deliver AI applications effectively, ensuring operational efficiency, model performance, and scalability.
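The training/serving consistency that feature stores provide can be illustrated with a minimal in-memory sketch. The class, entity, and feature names below are hypothetical illustrations, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureStore:
    """Toy feature store: one registry serves both training and inference."""
    _features: dict = field(default_factory=dict)  # (entity_id, name) -> value

    def put(self, entity_id: str, name: str, value: float) -> None:
        self._features[(entity_id, name)] = value

    def get_vector(self, entity_id: str, names: list[str]) -> list[float]:
        # Training and serving use the same lookup path, so the feature
        # definitions cannot silently diverge between the two.
        return [self._features[(entity_id, n)] for n in names]

store = FeatureStore()
store.put("user_42", "avg_session_minutes", 12.5)
store.put("user_42", "purchases_30d", 3.0)

FEATURES = ["avg_session_minutes", "purchases_30d"]
training_row = store.get_vector("user_42", FEATURES)  # at training time
serving_row = store.get_vector("user_42", FEATURES)   # at inference time
assert training_row == serving_row  # training/serving skew avoided by construction
```

Real feature stores add persistence, point-in-time correctness, and low-latency serving, but the core idea is the same: a single definition of each feature shared across the lifecycle.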
What phase are your AI initiatives currently in?
POC/Demo
Beta
Production
Analysis
AI initiatives are mainly in the Beta phase (43%), indicating extensive testing and gradual rollouts of new initiatives. A significant portion is also in Production (38%), showing mature deployment. The presence of projects in the POC / Demo phase (19%) highlights ongoing innovation.
CTO Insights
The high percentage of AI initiatives in the Beta and Production phases shows how eager companies are to integrate AI into their existing workflows. Over the next year, we anticipate more projects transitioning into the production phase.
Moving to production would require organizations to integrate a variety of tools and processes to maintain quality of service, security and compliance. Organizations would have to create novel processes, from legal and compliance to product and engineering, which may result in delays in transitioning into production.
Do you have LLM applications in production? If not, do you have plans to do so in the future?
Yes
No, but I'm planning to
Analysis
A substantial majority (78%) have LLM applications in production, reflecting widespread adoption and trust in these models. The remaining 22% planning future deployment indicates continued interest and growth.
CTO Insights
It seems that everyone working on AI initiatives either has them live in production or is actively planning to get there. The increased efficiency and additional benefits of real-world AI applications make them a top priority for many organizations.
What is your budget range for generative AI in 2024?
$1-10K
$10-100K
Over $100K
Analysis
The majority of respondents (59%) allocated $10-100K for generative AI, representing a significant investment. An additional 18% plan to invest over $100K, indicating high confidence in generative AI's potential.
CTO Insights
Organizations worldwide recognize the pivotal role of LLMs and AI in driving business efficiency. The budgets for generative AI reflect its importance and anticipated ROI. Companies are investing significantly in these technologies, expecting them to drive innovation, competitive advantage, and future cost savings.
These high budgets, however, make it challenging to achieve sustainable unit economics in AI applications. As large models require large amounts of compute and expensive GPU machines, there’s a race to reduce inference and training costs. The high costs associated with AI are some of the main blockers of its usage in production.
As we expect AI budgets to remain high, we believe that inference and token-generation costs will drop significantly over the coming years.
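The unit-economics pressure described above is easy to quantify with back-of-the-envelope arithmetic. The prices and volumes below are illustrative assumptions, not any provider's actual rates:

```python
def monthly_inference_cost(requests_per_day: int,
                           prompt_tokens: int,
                           completion_tokens: int,
                           price_in_per_1k: float,
                           price_out_per_1k: float) -> float:
    """Rough monthly cost estimate for an LLM-backed feature (30-day month)."""
    per_request = (prompt_tokens / 1000) * price_in_per_1k \
                + (completion_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

# Hypothetical pricing per 1K tokens; check your provider's real rates.
cost = monthly_inference_cost(requests_per_day=10_000,
                              prompt_tokens=800,
                              completion_tokens=200,
                              price_in_per_1k=0.01,
                              price_out_per_1k=0.03)
# Roughly $4,200/month at these assumed rates; halving the per-token price
# halves the bill, which is why falling inference costs matter so much.
```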
What is your current role?
Software Engineer
Executive / C-Level
Product Manager
Machine Learning Engineer
Data Scientist
Prompt Engineer
Other
Analysis
The largest group is Software Engineers (34%), followed by Executives/C-Level (26%) and Product Managers (20%). This diverse representation ensures comprehensive insights from technical, strategic, and managerial perspectives.
CTO Insights
AI initiatives are multi-faceted by nature, requiring a variety of tools and disciplines, from prompt engineering to model deployment and software engineering. Creating a production-ready AI product requires this variety of skills to tackle the non-deterministic nature of LLMs. It’s only natural that we’ll see a wide range of roles involved in these projects, from product managers to prompt engineers and data scientists.
As AI and LLM initiatives grow, we’ll see prompt engineers, software developers, PMs and analysts taking an active role in shaping and working on them. Part of the reason is the fact that LLMs provide a higher level of abstraction, allowing non-technical people to take an active part in shaping AI-based products.
Which AI providers are you using?
OpenAI
Anthropic
Vertex AI
AWS Bedrock
Cohere
Self-hosted models
AI21
Other
Analysis
OpenAI is the dominant provider (87%), followed by Vertex AI (45%) and AWS Bedrock (37%). The use of self-hosted models (21%) indicates a preference for customization and control among some organizations.
CTO Insights
OpenAI continues to dominate the LLM market due to their highly performant models and Azure private deployments. Anthropic comes right after, with their capable Claude models served through AWS Bedrock and their native API. Claude models are affordable and performant, providing increasingly popular competition to OpenAI’s GPT-4 series.
The high rate of adoption of models on AWS Bedrock and Google Vertex shows that companies favor deployments on their own cloud environments, for additional privacy, security and stability.
The usage of open-source models is expected to grow as well, as capable models such as Llama-3 and Mistral (and their future generations) are released. These models enable companies to save costs, improve performance, and fine-tune models for their own needs.
What prevents you from deploying LLM Apps today?
Talent or relevant expertise
Cost
Integrations with internal systems
Security & privacy
Lack of monitoring tools
Analysis
Integration with internal systems (52.94%) is the biggest barrier, followed by cost (35.29%) and security/privacy concerns (29.41%). Talent and lack of monitoring tools are also significant challenges.
CTO Insights
Integrating AI and LLMs into real-life workflows is challenging. Enterprises still use a variety of platforms and systems tailored to their own use-cases. Combine this with the unique challenges of integrating AI, and you’re in for a real challenge. The non-deterministic nature of LLMs demands safeguards, checks, and processes to maintain the highest level of quality. Achieving this while connecting to the various platforms in every company, and staying compliant, is difficult.
A top concern for companies is security and privacy. First, LLMs require a lot of data to provide valuable responses. These models are hosted on different providers, outside the cloud perimeter of organizations. Creating secure and compliant AI applications will remain top priority for organizations, and it follows that we’ll see more bespoke security solutions tailored for large enterprises.
Who handles Prompts in your organization?
Developers
Prompt Engineers
Product Managers
Data Scientists
Analysts
Analysis
Prompt handling is primarily managed by Developers (52%) and Prompt Engineers (49%), with significant involvement from Product Managers (45%) and Data Scientists (36%).
CTO Insights
Prompts are critical assets for AI applications. Crafting the right prompt is a mix of experimentation, trial and error, and best practices. As crafting prompts doesn’t require deep technical expertise, we see the rise of Prompt Engineers and PMs in handling prompts. Ensuring collaboration among these roles can enhance the effectiveness and efficiency of AI initiatives, and the introduction of LLMOps practices can streamline prompt management and improve outcomes. We expect the importance of prompts to persist as AI initiatives move into their production phases.
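One common LLMOps practice for this kind of cross-role collaboration is a versioned prompt registry: prompts live in one shared store rather than scattered through application code. A minimal sketch (the class, prompt name, and template are hypothetical):

```python
import hashlib
from datetime import datetime, timezone

class PromptRegistry:
    """Minimal versioned prompt store shared by developers, PMs,
    and prompt engineers as a single source of truth."""
    def __init__(self):
        self._versions: dict[str, list[dict]] = {}

    def publish(self, name: str, template: str) -> str:
        # Content-addressed version id: identical templates get identical ids.
        version = hashlib.sha256(template.encode()).hexdigest()[:8]
        self._versions.setdefault(name, []).append({
            "version": version,
            "template": template,
            "published_at": datetime.now(timezone.utc).isoformat(),
        })
        return version

    def latest(self, name: str) -> dict:
        return self._versions[name][-1]

registry = PromptRegistry()
v1 = registry.publish("support_reply",
                      "You are a helpful support agent. Question: {question}")
prompt = registry.latest("support_reply")["template"].format(
    question="Where is my order?")
```

A non-technical teammate can publish a new version without touching application code, and the version history makes it possible to correlate prompt changes with changes in output quality.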
What challenges are you facing with LLM applications?
Prompt management
Model evaluation
Usage and cost tracking
LLM workflow debugging
RAG pipelines
Managing testing environments
Analysis
The main challenge is LLM workflow debugging (49%), followed by usage and cost tracking (42%) and managing testing environments (38%). Prompt management and model evaluation are also significant issues.
CTO Insights
Debugging and evaluation remain top priorities for taking AI applications to production. Tools for debugging and evaluating LLM applications are improving rapidly, and we believe more tools will emerge around LLMOps, tailored specifically to the unique challenges of LLM applications.
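A basic building block of LLM workflow debugging is tracing: recording each step's inputs, outputs, and latency so a failing multi-step workflow can be inspected after the fact. A minimal sketch, with a deterministic stand-in function in place of a real model call:

```python
import functools
import time

TRACE_LOG: list[dict] = []

def trace_llm_call(func):
    """Record inputs, outputs, and latency of each traced call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        TRACE_LOG.append({
            "step": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": round(time.perf_counter() - start, 4),
        })
        return result
    return wrapper

@trace_llm_call
def summarize(text: str) -> str:
    # Stand-in for a real model call; deterministic for demonstration.
    return text[:20] + "..."

summary = summarize("The quarterly report shows steady growth in all regions.")
```

Dedicated LLMOps tools extend the same idea with nested spans, token counts, and UI inspection, but even a log like this makes non-deterministic failures far easier to reproduce and diagnose.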
What is your most significant challenge in deploying LLMs?
Budget
Lack of expertise
Don’t have the right tools
Security & privacy
Lack of monitoring
Analysis
Lack of expertise (29%) is the most significant challenge, followed by security & privacy concerns (25%) and budget constraints (21%).
CTO Insights
Security and talent shortages are critical challenges in deploying LLM applications. AI models bring security concerns including data privacy, integrity, and compliance with regulations like GDPR. Additionally, the non-deterministic nature of LLMs introduces new security vulnerabilities and attack vectors. On the talent side, deploying LLMs requires specialized skills in a rapidly changing world, making it difficult for organizations to effectively train and hire relevant teams to handle these complex applications.
Which use-cases are you currently working on?
Software development
Knowledge management
Customer Service
Recommendations
Text summarization
Other (1%)
Analysis
Software development (63%) and knowledge management (53%) are the top use-cases, followed by customer service (42%) and recommendations (32%).
CTO Insights
Companies leverage AI in text-intensive processes. These naturally include code generation, customer support, and knowledge management. So far, these areas have also proven the most valuable for the current generation of LLMs. As models become more powerful and unique integrations are created, we’ll see AI and LLMs integrated into new business processes and domains as well.
Was there a budget increase in 2024 for GenAI initiatives within your organization?
Budget decreased
0% (no change)
1-20%
21-40%
41-80%
81-100%
Analysis
The responses indicate a wide range of budget changes for GenAI initiatives in 2024, with the majority of organizations reporting budget increases. Here is a breakdown of the findings:
Significant Budget Increase
A combined 23% of respondents reported substantial budget increases ranging from 41% to 100%. This indicates that nearly a quarter of the organizations are making strong financial commitments to advancing their GenAI capabilities, reflecting high confidence in the potential ROI and strategic importance of these technologies.
Modest Budget Increases
The largest group, 45%, reported modest budget increases of 1-20%. This suggests a cautious yet positive approach to expanding GenAI initiatives, likely balancing investment with measured expectations of return on investment (ROI).
Moderate to No Increase
21% of respondents indicated budget increases of 21-40%, a firmer commitment to GenAI than the modest-increase group, though more conservative than those with the highest increases. Additionally, 9% reported no budget increase (0%), indicating that a small segment of organizations are maintaining their current level of investment without additional funding.
Budget Decrease
Only 1% of respondents reported a budget decrease, suggesting that very few organizations are reducing their financial commitment to GenAI initiatives.
CTO Insights
As companies use LLMs in more and more ways, the need for larger budgets grows. Even though inference costs are slowly dropping, use-cases are becoming more common, which in turn increases AI budgets. We believe that companies that want to leverage AI will still have to spend significant budgets in the coming years.
Conclusion
The 2024 State of LLMs survey provides a comprehensive overview of the current trends, challenges, and investment strategies in the AI industry. The diverse responses highlight the multifaceted nature of AI adoption and the critical areas where organizations are focusing their efforts. As AI continues to evolve, addressing the identified challenges and leveraging the insights from this survey will be crucial for driving future advancements and maximizing the impact of AI technologies.
Demographics
Age
18-24
25-34
35-44
45-54
>54
Education
High School
Vocational/Technical College
University
Post-graduate
Number of employees
26-50
51-100
101-250
251-500
501-1000
1001-5000
Organizational role
Owner or Partner
President/CEO/Chairperson
C-level executive
Middle management
Chief Technical Officer
Senior management
Product Manager
Other
Country
United States
United Kingdom
Canada