
Techaisle Blog

Insightful research, flexible data, and deep analysis by a global SMB IT Market Research and Industry Analyst organization dedicated to tracking the Future of SMBs and Channels.
Anurag Agrawal

Midmarket Firms Piloting GenAI with Multiple LLMs, According to Techaisle Research

The landscape of GenAI is rapidly evolving, and midmarket firms are striving to keep pace with this change. New data from Techaisle (SMB/Midmarket AI Adoption Trends Research) sheds light on a fascinating trend: core and upper midmarket firms are adopting multiple large language models (LLMs), an average of 2.2 per firm. The data also shows that 36% of midmarket firms are piloting an average of 3.5 LLMs, and another 24% will likely add 2.2 more LLMs within the year.

The survey reveals a preference for established players like OpenAI, with a projected penetration rate of 89% within the midmarket firms currently adopting GenAI. Google Gemini is close behind, with an expected adoption rate of 78%. However, the data also paints a picture of a dynamic market. Anthropic is experiencing explosive growth, with an anticipated adoption growth rate of 100% and 173% in the upper and core midmarket segments, respectively. A recent catalyst in midmarket interest for Anthropic is the availability of Anthropic’s Claude 3.5 Sonnet in Amazon Bedrock.

This trend towards multi-model adoption signifies a crucial step – midmarket firms are no longer looking for a one-size-fits-all LLM solution. They are actively exploring the functionalities offered by various models to optimize their specific needs.

However, the data also raises questions about the long-term sustainability of this model proliferation due to higher costs, demand for engineering resources (double-bubble shocks), integration challenges, and security concerns. Additionally, market saturation might become a challenge, with several players offering overlapping capabilities. Only time will tell which models will endure and which will fall by the wayside.

Furthermore, the survey highlights a rising interest in custom-built LLMs. An increasing portion of midmarket firms (11% in core and 25% in upper) will likely explore this avenue. In a corresponding study of partners, Techaisle data shows that 52% of partners offering GenAI solutions anticipate building custom LLMs, and 64% are building small language models (SLMs) for their clients, indicating a potential shift towards smaller specialized solutions.

[Chart: Techaisle data on midmarket multi-model GenAI adoption]

Why Multi-Model Makes Sense for Midmarket Firms

The journey from experimentation to full-fledged adoption requires a strategic approach, and many midmarket firms are discovering the need to experiment with and utilize multiple GenAI models. There are several compelling reasons why midmarket firms believe that a multi-model strategy might be ideal:

Specificity and Optimization: Various LLMs specialize in different tasks. Midmarket firms believe they can benefit from a multi-model strategy, using the best-suited model for each purpose. This may enhance efficiency and precision across a broad spectrum of use cases. Since GenAI can reflect biases from its training data, a multi-model approach also serves as a safeguard. Combining models informed by diverse datasets and viewpoints ensures a more equitable and efficient result.

Future-Proofing: LLMs are rapidly advancing, offering a stream of new features. Without a visible roadmap from LLM providers, midmarket firms hope to benefit from using various models to stay current with these innovations and remain flexible in a dynamic market. As business requirements shift, a diversified model strategy enables modification of their GenAI tactics to align with evolving needs. This strategy permits businesses to expand specific models to meet increasing demands or retire outdated ones as necessary.
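The "best-suited model for each purpose" idea described above can be sketched as a simple routing table. This is a minimal illustration, not a vendor API; the model names, task categories, and routing policy are all hypothetical.

```python
# Sketch of a task-based model router for a multi-model GenAI stack.
# Model identifiers and task categories below are illustrative assumptions.

TASK_ROUTES = {
    "code_generation": "model-a",   # e.g. a code-specialized LLM
    "summarization": "model-b",     # e.g. a long-context LLM
    "classification": "model-c",    # e.g. a small, inexpensive model
}

DEFAULT_MODEL = "model-b"  # fallback when no specialized route exists

def route_task(task_type: str) -> str:
    """Pick the model best suited to a task, falling back to a default."""
    return TASK_ROUTES.get(task_type, DEFAULT_MODEL)
```

In practice, the routing table itself becomes a governance artifact: adding, swapping, or retiring a model is a one-line change rather than a rewrite of application code.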

Despite the benefits, midmarket firms are also experiencing challenges:

High Cost: LLMs carry a high price tag, particularly for smaller midmarket companies. Creating and maintaining an environment that supports multiple models leads to a substantial rise in operational expenses. Therefore, a small percentage of midmarket firms are conducting a thorough cost-benefit analysis for every model and optimizing the distribution of resources to ensure financial viability over time. Managing and maintaining multiple LLMs is also time-consuming, as different models have varying data formats, APIs, and workflows. Developing a standardized approach to LLM utilization across the organization has proven challenging, and a shortage of engineering resources has surfaced.

Specialized Skills: Deploying and leveraging multiple LLMs necessitates specialized skills and knowledge. To fully capitalize on the capabilities of a diverse GenAI system, it is essential to have a team skilled in choosing suitable models, customizing their training, and integrating them effectively. Midmarket firms are investing in training for their current employees or onboarding new specialists proficient in LLMs.

Integration Challenges: Adopting a multi-model system has benefits but can complicate the integration process. Midmarket firms are challenged to craft a comprehensive strategy to incorporate various models into their current workflow and data systems. The complexity of administering and merging numerous GenAI models necessitates a solid infrastructure and technical know-how to maintain consistent interaction and data exchange among the models.
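One common way to tame the varying data formats and APIs described above is a thin adapter layer that gives every model the same interface. The provider classes and the `complete()` signature below are hypothetical stand-ins for real vendor SDKs; this is a sketch of the pattern, not any particular firm's implementation.

```python
# Minimal sketch of a provider-agnostic adapter layer. Provider names and
# response formats are invented for illustration.

from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    """Uniform contract so workflow code is not tied to one vendor's API."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderAAdapter(LLMAdapter):
    def complete(self, prompt: str) -> str:
        # Real code would call vendor A's SDK and unwrap its response object.
        return f"[provider-a] {prompt}"

class ProviderBAdapter(LLMAdapter):
    def complete(self, prompt: str) -> str:
        # Vendor B might return structured JSON; the adapter flattens it.
        return f"[provider-b] {prompt}"

def run(adapter: LLMAdapter, prompt: str) -> str:
    # Application code depends only on the adapter contract,
    # so swapping or adding models does not ripple through workflows.
    return adapter.complete(prompt)
```

The payoff is that the integration cost of each additional model is confined to writing one adapter, rather than touching every workflow that consumes GenAI output.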

Midmarket Firms Intend to Adopt DataOps to Develop GenAI Solutions Economically

While large enterprises have shown how effective DevOps can be for traditional app development and deployment, midmarket firms notice that conventional DevOps approaches may not fit as well for emerging AI-powered use cases or GenAI. Techaisle data shows that only half of the midmarket firms currently have the necessary talent in AI/ML, DevOps, hybrid cloud, and app modernization. Although DevOps is great for improving the software lifecycle, the distinct set of demands introduced by GenAI, primarily due to its dependence on LLMs, poses new hurdles.

A primary focus for midmarket firms is ensuring a steady user experience (UX) despite updates to the foundational model. Unlike conventional software with updates that may add new features or bug fixes, LLMs are built to learn and enhance their main functions over time. As a result, while the user interface may stay unchanged, the LLM that drives the application is regularly advancing. However, changing, or even swapping out, these models can be expensive.

DataOps and AnalyticsOps have emerged as essential methodologies tailored to enhance the creation and deployment of data-centric applications, such as those powered by GenAI. DataOps emphasizes efficient data management throughout development, ensuring the data is clean, precise, and current to train LLMs effectively. AnalyticsOps, in turn, concentrates on the ongoing evaluation and optimization of GenAI applications' real-world performance. Through persistent oversight of user interactions, DataOps and AnalyticsOps empower midmarket firms to pinpoint potential enhancements within the LLM without requiring extensive revisions, facilitating an incremental and economical approach to GenAI enrichment.

Ultimately, midmarket firms are considering adopting DataOps and AnalyticsOps with a strategic intent to adeptly handle the intricacies inherent in developing GenAI solutions. By prioritizing data integrity, continuous performance assessment, and progressive refinement, these firms hope to harness GenAI's capabilities cost-effectively.
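The "persistent oversight" part of AnalyticsOps can be as simple as comparing user-feedback metrics before and after a foundation-model update and flagging regressions. The sketch below assumes per-day feedback scores and a tolerance threshold; the field names, numbers, and threshold are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative AnalyticsOps-style check: did average response quality drop
# after a model update? Scores and the 0.05 tolerance are assumed values.

from statistics import mean

def flag_regression(before: list[float], after: list[float],
                    tolerance: float = 0.05) -> bool:
    """Return True if average quality dropped by more than `tolerance`."""
    return mean(before) - mean(after) > tolerance

pre_update = [0.82, 0.79, 0.85, 0.81]    # e.g. daily thumbs-up rates
post_update = [0.71, 0.69, 0.74, 0.70]   # same metric after the update

if flag_regression(pre_update, post_update):
    # In practice this would alert the team or trigger a model rollback.
    print("regression detected")
```

A check like this lets a firm catch a UX regression from a foundational-model change early, without rebuilding the evaluation pipeline for each model swap.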

Final Techaisle Take

The success of GenAI implementation probably hinges on a multi-model strategy. Firms that effectively choose, merge, and handle various models stand to fully exploit GenAI's capabilities, gaining a considerable edge over competitors. As GenAI progresses, strategies to tap into its capabilities must also advance. The key to future GenAI advancement is employing various models and orchestrating them to foster innovation and success.

Anurag Agrawal

Securing the Future: Cisco's Innovative Leap in Security and Observability

Today's cybersecurity landscape is a complex maze, with a multitude of vendors contributing to a convoluted and intricate security stack. The evolution of security from traditional perimeter defenses around private data centers to a distributed network of branch offices, remote workers, and IoT devices has necessitated a radical shift in security strategies, with a focus on enforcement points across the network. At its core, security is a data challenge, where the sheer volume of data often hinders the identification of actionable insights, leading to an imbalanced signal-to-noise ratio and the prevalent issue of alert fatigue. Effective data connection across control points is crucial to transform low-level alerts into critical insights that demand immediate action.

Under the visionary leadership of Jeetu Patel, Executive Vice President and General Manager of Security and Collaboration, Cisco's security product portfolio has undergone a transformative evolution. This radical re-envisioning of security paradigms has significantly refined Cisco's security cloud solutions, streamlining the adoption process for an integrated security platform. In response to the complexities of distributed environments, Cisco introduced 'Hypershield,' a pioneering expansion of the hyper-distributed architecture concept tailored to meet the demands of hyper-distributed security. The strategic acquisition of Splunk has further fortified Cisco's capabilities, enabling it to manage the signal-to-noise ratio effectively. Leveraging Splunk's advanced data analytics, Cisco aims to mitigate alert fatigue by converting many low-level events into meaningful, actionable insights.

[Image: Cisco security, Cisco Live 2024]

The Birth of the Cisco Security Cloud Platform

In June 2022, Cisco introduced the Cisco Security Cloud Platform at the RSA Conference, a visionary solution designed to streamline the complexity of managing disparate security tools. This platform offers a unified experience, ensuring secure connections for users and devices to applications and data, irrespective of location.

The platform's emphasis on openness provides a comprehensive suite for threat prevention, detection, response, and remediation at scale. At its core is a powerful firewall, enhanced with AI for superior analysis. Identity management is flawlessly integrated, allowing every Cisco security product to leverage AI-driven insights and user authentication.

Cisco addressed the challenge customers faced with the vast array of security products—approximately 30 products with over 1,000 variations—by significantly simplifying its portfolio. Customers now have a choice of three intuitive suites: User Protection, Cloud Protection, and Breach Protection. These suites are not merely bundled; they are fully integrated, facilitating seamless communication and improved functionality, making security management far more straightforward and efficient.

Tackling Hyper-Distributed Security with Cisco Hypershield

As an industry analyst, I am convinced that Cisco's recent strides in security innovation are nothing short of impressive. The 2023 launch of Cisco Multi-cloud Defense, Cisco XDR, Cisco Secure Access, and advanced firewall functionalities marked a year of significant progress. The introduction of Cisco AI Assistant was a testament to its commitment to continuous innovation. In 2024, Cisco took a giant leap by introducing Hypershield, a sophisticated, AI-enhanced, cloud-native security system set to redefine cybersecurity.

Anurag Agrawal

A Comprehensive Look at Dell AI Factory and Strategies for AI Adoption

The rapid pace of AI innovation, coupled with the complexity of implementation, creates challenges for many businesses. Concerns around data security, intellectual property, and the high costs of running and managing AI models further complicate their AI journey. This is where Dell steps in, leveraging its extensive expertise in AI and innovative solutions to help businesses navigate these challenges. The company focuses on developing data management solutions, launching powerful computing hardware, and building partnerships to ensure businesses are equipped for the demands and opportunities of AI.

As part of its commitment to democratizing AI, Dell unveiled the Dell AI Factory at the recent Dell Technologies World (DTW) conference in May 2024. This unique initiative stands out for providing customers access to one of the industry's most comprehensive AI portfolios, from device to data center to cloud. The AI Factory, a distinctive combination of Dell's infrastructure, expanding partner ecosystem, and professional services, offers a simple, secure, and scalable approach to AI delivery. Its objective is to integrate AI capabilities directly within data sources, transforming raw data into actionable intelligence and thereby enhancing business operations and decision-making processes. In addition, Dell announced new channel programs to foster collaboration and accelerate AI adoption, recognizing the vital role of channel partners in driving revenue. With Dell's AI Factory, businesses can confidently embark on their AI journey, knowing they have a trusted partner to guide them every step of the way.

Understanding the AI Factory

To adopt AI on a large scale, a robust infrastructure is crucial. Conventional IT setups designed for regular computing often struggle to meet the complex demands of AI workloads. This is where the concept of an AI Factory becomes significant. Picture it as a specialized center with powerful computing systems, advanced data processing tools, and a team of AI experts. The AI Factory is designed to streamline the development, deployment, and scaling of AI solutions, making them easier and faster. By consolidating these elements, an AI Factory ensures that AI innovations can be swiftly created and applied, reducing delays and increasing efficiency, thereby simplifying the complex process of AI deployment for businesses.

The Dell AI Factory simplifies AI deployment by offering essential components like servers, storage, and networking in one place. This streamlined approach eliminates the need for businesses to find and combine these components separately – and ensures they work well together, saving significant time and resources. Customers also gain access to Dell's AI expertise and a reliable ecosystem of partners. This comprehensive solution empowers businesses to choose from individual products or create custom configurations to fit their AI needs. The Dell AI Factory also offers different consumption models, including purchases, subscriptions, and as-a-service options, providing businesses the flexibility to adopt AI at their own pace. With Dell's comprehensive AI portfolio, businesses can feel secure knowing they have all the tools they need for successful AI adoption.

The Dell AI Factory is not just a collection of products. It is a comprehensive solution designed to simplify AI integration for businesses of all sizes. Whether a business, such as an SMB, is starting small with PCs or deploying AI across a server network, the Dell AI Factory equips customers with the tools and expertise to achieve real-world results.

This powerful combination of high-performance infrastructure, industry-leading services, and deep AI knowledge can empower businesses to embrace AI confidently. The Dell AI Factory goes beyond hardware, offering a complete package that simplifies the entire AI adoption process and makes Dell a key player in accelerating real-world AI applications.

[Slide: Dell AI Factory]

Dell AI Factory Infrastructure

Training and deploying AI models require significant computational power and vast datasets. While convenient for many businesses, public cloud solutions can become expensive for these resource-intensive tasks and introduce security risks and the potential for IP infringement. Businesses increasingly seek on-premises solutions for greater control over data and resources and cost optimization. The Dell AI Factory addresses these challenges by providing a robust foundation built on Dell's core strengths in infrastructure solutions—servers, storage, data protection, and networking. This robust infrastructure delivers the necessary computational muscle and storage capacity for AI workloads.

Anurag Agrawal

Unveiling the Future of AI: IBM's Unwavering Commitment to Innovation and Collaboration

Enterprises are grappling with AI's transformative power, a technology poised to automate and elevate both creative and analytical tasks. Despite its early stages, AI adoption holds immense growth potential. Recognizing this, IBM took center stage at Think 2024, showcasing its unwavering commitment to AI innovation.

IBM's leadership in driving enterprise AI is evident through its three key pillars: open-source initiatives, which democratize AI and make it more accessible; leveraging the combined expertise of its consulting arm and ecosystem partners, which ensures the highest quality of AI solutions; and significant updates to the watsonx platform, which enhance the performance and capabilities of AI. These updates are about technological advancements and making AI more accessible and impactful for businesses worldwide. By prioritizing openness, affordability, and flexibility, IBM is breaking down barriers and paving the way for widespread AI adoption.

Open-Source Innovations and InstructLab

AI, a field deeply rooted in open collaboration, has been shaped by a tradition that dates back to its inception. Consider Alan Turing's groundbreaking 1950 paper, 'Computing Machinery and Intelligence,' which introduced the world to AI. This vision turned into reality thanks to the open sharing of work by countless researchers worldwide, laying the foundation for the AI we know today. IBM, a torchbearer of this tradition, continues to place open-source innovation at the core of its latest AI initiatives, inviting users and businesses to be part of this rich and impactful tradition.

IBM has unveiled a family of Granite models, a significant addition to the open-source AI ecosystem. These models, with parameter counts ranging from 3 billion to 34 billion, have been trained on code from 116 programming languages. They are available in base and instruction-following variants, offering a wide range of applications, from complex application modernization to bug fixing. These models represent some of IBM's most advanced language and code capabilities and are available under Apache 2.0 licenses on collaborative platforms such as Hugging Face and GitHub. This exciting development opens up a world of possibilities.

IBM's approach to AI development sets it apart from other major companies. While many have chosen to release pre-trained models, withholding the datasets used for training, IBM has taken a different path. It has offered open-source models, democratizing AI development and inviting clients, developers, and experts worldwide to explore new AI advancements in enterprise settings. This unique strategy, coupled with IBM's commitment to quality and efficiency, ensures that these models consistently generate high-quality code superior to many alternative large language models (LLMs) and excel at various code-related tasks, surpassing larger open-source counterparts.

“We firmly believe in bringing open innovation to AI. We want to use the power of open source to do with AI what was successfully done with Linux and OpenShift,” said IBM CEO Arvind Krishna at IBM’s Annual Think Conference.

[Image: IBM Think 2024]

To further its commitment to open-source AI, IBM has announced InstructLab, an open-source project designed to address challenges in fine-tuning LLMs for specialized tasks. This project focuses on scalability by efficiently handling large volumes of data for model training and specialization by tailoring models to specific industry needs.
