Scaling AI like a tech native: The CEO’s role - Deepstash
Embedding AI across an enterprise to tap its full business value requires shifting from bespoke builds to an industrialized AI factory. 

For AI to make a sizable contribution to a company's bottom line, organizations must scale the technology across the enterprise, infusing it into core business processes, workflows, and customer journeys to optimize decision making and operations daily.

Achieving such scale requires a highly efficient AI production line, where every AI team quickly churns out dozens of race-ready, risk-compliant, reliable models.


As AI has matured, so too have roles, processes, and technologies designed to drive its success at scale. Specialized roles such as data engineer and machine learning engineer have emerged to offer skills vital for achieving scale.

A rapidly expanding stack of technologies and services has enabled teams to move from a manual and development-focused approach to one that’s more automated, modular, and fit to address the entire AI life cycle, from managing incoming data to monitoring and fixing live applications.


A best-in-class framework for ways of working, often called MLOps (machine learning operations), can now enable organizations to take advantage of these advances and create a standard, company-wide AI "factory" capable of achieving scale. Since MLOps is relatively new and still evolving, definitions of what it encompasses within the AI life cycle can vary.

The potential improvements are best examined from four essential angles: productivity and speed, reliability, risk, and talent acquisition and retention.


Moving AI solutions from idea to implementation typically takes nine months to more than a year, making it difficult to keep up with changing market dynamics. Even after years of investment, leaders often tell us that their organizations aren't moving any faster.

Companies applying MLOps can go from idea to a live solution in just two to 12 weeks without increasing headcount or technical debt, reducing time to value and freeing teams to scale AI faster.


Achieving productivity and speed requires streamlining and automating processes, as well as building reusable assets and components, managed closely for quality and risk, so that engineers spend more time putting components together instead of building everything from scratch.


Another critical element for speed and productivity improvements is developing modular components, such as data pipelines and generic models, that are easily customizable for use across different AI projects. By building a central AI platform with modular premade components on top, one pharmaceutical company was able to industrialize a base AI solution that could rapidly be tailored to account for different drug combinations in each market.
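The idea of a base solution assembled from premade, swappable parts can be sketched in code. This is a minimal illustration, not anything from the article: the pipeline class, step functions, and data shapes are all assumptions, standing in for the kind of centrally maintained components a real platform would provide.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A pipeline is just an ordered list of steps; projects customize by
# swapping or appending steps rather than rebuilding from scratch.
Step = Callable[[list], list]

@dataclass
class Pipeline:
    steps: List[Step] = field(default_factory=list)

    def run(self, data: list) -> list:
        for step in self.steps:
            data = step(data)
        return data

# Shared, centrally maintained components
def drop_missing(rows):
    """Remove records containing missing values."""
    return [r for r in rows if None not in r]

def normalize(rows):
    """Scale each feature column to the [0, 1] range."""
    cols = list(zip(*rows))
    spans = [(min(c), (max(c) - min(c)) or 1) for c in cols]
    return [[(v - lo) / span for v, (lo, span) in zip(r, spans)]
            for r in rows]

# A market-specific project reuses the shared base as-is
base = Pipeline([drop_missing, normalize])
print(base.run([[2, 10], [4, 20], [None, 30]]))  # → [[0.0, 0.0], [1.0, 1.0]]
```

A team tailoring the base for a new market would add its own step to `base.steps` while inheriting the vetted shared components unchanged.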


Organizations often invest significant time and money in developing AI solutions only to find that the business stops using nearly 80 percent of them because they no longer provide value—and no one can figure out why that's the case or how to fix them. In contrast, we find that companies using comprehensive MLOps practices shelve 30 percent fewer models and increase the value they realize from their AI work by as much as 60 percent.


While a robust risk-management program driven by legal, risk, and AI professionals must underlie any company’s AI program, many of the measures for managing these risks rely on the practices used by AI teams. MLOps bakes comprehensive risk-mitigation measures into the AI application life cycle.

Reusable components, replete with documentation on their structure, use, and risk considerations, also limit the probability of errors and allow for component updates to cascade through AI applications that leverage them.
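One way documented components and cascading updates can work is through a central registry that applications resolve at run time, so a vetted fix published centrally reaches every dependent application on its next run. The sketch below is a hypothetical illustration of that pattern; the registry, component names, and version scheme are all assumptions.

```python
# Hypothetical central registry of documented, versioned components.
# Apps look components up by name at run time, so publishing an updated
# version here cascades to every application that uses it.
REGISTRY = {}

def register(name, version):
    """Decorator that records a component plus its docs and version."""
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "version": version, "doc": fn.__doc__}
        return fn
    return wrap

@register("clip_outliers", "1.0.0")
def clip_outliers(values, lo=0.0, hi=100.0):
    """Clamp values into [lo, hi]. Risk note: bounds reviewed centrally."""
    return [min(max(v, lo), hi) for v in values]

def resolve(name):
    """Fetch the current implementation of a registered component."""
    return REGISTRY[name]["fn"]

print(resolve("clip_outliers")([-5, 50, 120]))  # → [0.0, 50, 100.0]
```

Because each entry carries its documentation and version, risk reviewers can audit what is deployed, and a bumped version number signals downstream teams that behavior may have changed.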


The availability of technical talent is one of the biggest bottlenecks for scaling AI and analytics in general. When deployed well, MLOps can serve as part of the proposition to attract and retain critical talent. Most technical talent gets excited about doing cutting-edge work with the best tools, focusing on challenging analytics problems, and seeing the impact of their work in production.


Implementing MLOps requires significant cultural shifts to loosen firmly rooted, siloed ways of working and focus teams on creating a factory-like environment around AI development and management. Building an MLOps capability will materially shift how data scientists, engineers, and technologists work as they move from bespoke builds to a more industrialized production approach. As a result, CEOs play a critical role in three key areas: setting aspirations, facilitating shared goals and accountability, and investing in talent.


As in any technology transformation, CEOs can break down organizational barriers by vocalizing company values and their expectations that teams will rapidly develop, deliver, and maintain systems that generate sustainable value. CEOs should be clear that AI systems operate at the level of other business-critical systems that must run 24/7 and drive business value daily. While vision setting is key, it pays to get specific on what’s expected.


Among the key performance metrics CEOs can champion are the following:

  • the percentage of models built that are deployed and delivering value, with an expectation of 90 percent of models in production having real business impact
  • the total impact and ROI from AI as a measurement of true scalability
  • near-real-time identification of model degradation and risks, including shifts in underlying data (particularly important in regulated industries).
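The last metric, near-real-time detection of model degradation and data shifts, can be made concrete with a simple drift check. This is a minimal sketch under stated assumptions: the function names, the three-standard-deviation threshold, and the single-feature comparison are illustrative choices, not a prescribed method (production systems typically use richer statistical tests per feature).

```python
import statistics

def drift_score(baseline, live):
    """How many baseline standard deviations the live mean has shifted."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0
    return abs(statistics.mean(live) - mu) / sigma

def is_drifting(baseline, live, threshold=3.0):
    """Flag the model for review when the shift exceeds the threshold."""
    return drift_score(baseline, live) > threshold

# Feature values seen at training time vs. a recent live window
baseline = [10.0, 11.0, 9.5, 10.5, 10.0]
print(is_drifting(baseline, [10.2, 9.9, 10.4]))   # → False (in range)
print(is_drifting(baseline, [25.0, 26.1, 24.7]))  # → True  (large shift)
```

Running a check like this on each scoring window is what turns "models degrade silently" into an alert a team can act on, which is especially valuable in the regulated industries the metric calls out.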


One of the fundamental litmus tests for impact is the degree to which goals are shared across business leaders and the respective AI, data, and IT teams. Ideally, the majority of goals for AI and data teams should be in service of business leaders’ goals. Conversely, business leaders should be able to articulate what value they expect from AI and how it will come to fruition.


New roles have emerged on AI teams, such as the machine learning engineer, who is skilled in turning AI models into enterprise-grade production systems that run reliably. To build out its ML engineering team, a North American retailer combined the existing expertise of internal IT developers, who understood and could effectively navigate the organization's systems, with new external hires who brought broad MLOps experience from different industries.
