In a nutshell:
- AI and ML offer compounding advantages when done right, with early adopters outperforming competitors.
- Specialized LLMs deliver more accurate, domain-specific results than general-purpose models.
- There is a growing demand for AI and ML talent, with automation freeing up time for more impactful work.
- Low- and no-code interfaces are on the rise, allowing users to build predictive models without extensive coding knowledge.
- AI-for-good initiatives are gaining traction, but careful consideration is needed to balance intentions with potential risks.
When done right, AI and machine learning (ML) technologies offer “compounding advantages” for organizations, with early successful adopters outperforming competitors by “two to six times on total shareholder returns (TSR).”
Unfortunately, while there’s much promise for these technologies, many of the most talked-about (and hyped-up) trends fall short. One newsworthy example was McDonald’s recent experiment with an AI order taker, which misinterpreted customers’ drive-through orders with meme-worthy results. The brand had to shelve the technology until it could fix the algorithm that, in one case, added more than $200 worth of nuggets to a single order.
In response to AI’s hype, McKinsey senior partner Eric Lamarre cautions against trying new tech for the sake of following trends, even going so far as to call some GenAI applications “technology in search of a problem.” The McDonald’s example could be seen by some as a distraction from meaningful and well-thought-out digital transformation efforts.
Rather than chase the next trend and find somewhere to fit it into your workstream, Lamarre emphasizes starting with your business problem. With that said, there are some tried-and-true trends that we don’t see disappearing anytime soon. The following AI and ML trends offer scalability and value, especially for forward-thinking organizations with the discipline to use them in truly applicable ways.
AI and ML Trend #1: An increase in specialized LLMs
Large language models (LLMs) like ChatGPT and Claude offer incredible advantages by helping businesses create new product offerings with just a few basic questions. However, these LLMs draw from text sources in every domain, meshing together data from a law journal with lyrics from a pop album to produce an answer meant to satisfy the average user’s request. As some have put it, “ChatGPT’s training data is extremely wide but not very deep.”
Specialized LLMs, which take data sets from specific and vetted sources, offer more domain-specific results. These smaller language models focus only on relevant data and deliver answers more in line with the intent of a domain’s specific user demographic. They’re more sustainable, too, as they need fewer resources to update the smaller pool of datasets and take less time and talent to maintain.
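To make the idea concrete, here is a deliberately simplified sketch of grounding answers in a small, vetted domain corpus. The documents, the keyword-overlap retrieval, and the prompt format are all hypothetical illustrations, not any specific vendor’s implementation:

```python
# A deliberately simplified sketch of restricting answers to a small,
# vetted domain corpus. The documents, retrieval logic, and prompt
# format below are hypothetical illustrations, not any vendor's code.

VETTED_DOCS = [
    "Section 4.2: Claims must be filed within 90 days of the incident.",
    "Section 7.1: Premium adjustments require underwriter approval.",
    "Section 9.3: Coverage excludes damage from unreported modifications.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank vetted documents by simple keyword overlap with the query."""
    q_terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that exposes only vetted domain text to the model."""
    context = "\n".join(retrieve(query, VETTED_DOCS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

print(build_prompt("When must a claim be filed?"))
```

A production specialized model would use embeddings over a much larger vetted corpus rather than keyword overlap, but the principle is the same: the model only ever sees domain-approved text.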
Why it matters
While there’s no real consensus on how often AI hallucinates or fabricates an outcome, tools like ChatGPT fail at basic math in up to one out of ten instances. That lack of accuracy may be inconvenient when helping with a child’s homework, but it would be reckless in use cases like health diagnostics or important business decisions.
That may make reliability the most pressing issue with AI-generated outcomes, as companies can’t afford to stake business decisions on the large, Wild West-style models driving consumer tools today. Data engineers have a better chance of controlling smaller data pools, keeping hallucinations and bad advice to a minimum. These smaller LLMs can also capture the nuance of an industry’s unique terminology so that answers carry more meaning for the fields that seek them.
Where it’s going
These small language models (SLMs) do more than narrow the scope of an unwieldy amount of knowledge; they offer promise in model efficiency, too. Trying out new queries on these smaller, domain-specific models makes it easier to spot what could go wrong much earlier in the process. Engineers can fine-tune prompts to get the right results and scale as they find success. The approach also offers a fantastic ROI for highly regulated domains that, for security reasons, can’t reach out to the cloud for answers and must host models on their own servers or edge hardware.
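As a rough illustration of that self-hosted option, the snippet below loads a small open model locally so prompts and answers never leave your own hardware. It assumes the Hugging Face transformers library; distilgpt2 is only a placeholder for whatever domain-tuned SLM an organization would actually deploy:

```python
# A rough sketch of hosting a small language model on your own hardware,
# so prompts and answers never leave the organization's servers.
# Assumes the Hugging Face `transformers` library; "distilgpt2" is only a
# placeholder for whatever domain-tuned SLM you would actually deploy.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU

prompt = "Summarize the key compliance requirement for claim filing:"
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```

Because everything runs on local CPU or edge hardware, no prompt or answer ever crosses the network boundary.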
AI and ML Trend #2: A growing and dire need for AI and ML talent
In 2018, the World Economic Forum predicted that AI and ML skills would be among the top 10 needed to meet the future’s workforce demands. Years have passed since then, and GenAI is now a very present technology. Meanwhile, current reports show overall job counts declining as work shifts to automation, with one estimate suggesting AI will handle 30% of current work hours by 2030.
So, is there still a massive demand for these roles? Emphatically, yes.
Forty percent of executives say they can’t fill roles because candidates lack advanced programming, data analysis, problem structuring, and complex processing skills. In addition, automation isn’t truly replacing people; it’s freeing up their time for more impactful work. So, while the GenAI boom may help automate some work, it won’t eliminate the need for specialized AI and ML talent. In fact, that need is only growing.
Why it matters
Companies can’t scale (or do much of anything) without the right people. Just consider the U.S. chip manufacturing workforce, which is 43% smaller than in 2000. Despite the high demand for talent, just 1,000 technicians and 1,500 engineers join the workforce annually.
Organizations must hire, onboard, and train differently to stay ahead of the curve. Trends point to upskilling and even reskilling as two options for employers who want to keep their best and brightest as they enter this new age of technology. With AI and ML doing more with vast amounts of data, organizations will need to retain and recalibrate their creative problem solvers to interpret the results and turn those analyses into useful, real-world applications.
Where it’s going
Companies should look at both how humans interact on the front end, building models and training AI, and what they do with the outputs. These models can only get better with these talented employees bookending powerful AI platforms in everyday processes. Skilled talent will continue to be in very high demand. Still, as new generations of workers make AI part of their daily routine, they’ll also be more naturally inclined to partner with technology in even the most human and mundane endeavors.
AI and ML Trend #3: A push toward no-code and low-code interfaces
One trend that may solve part of another trend (the workforce dilemma) is the growth of low- and no-code tools. While the typical org may still need data engineering support for the most robust tasks, there’s a future where anyone can build powerful predictive models, coding skills or not.
Marketers, business insight analysts, and even data hobbyists can assemble their own ML solution in just a few steps using a tool like Pecan or another low-code Predictive GenAI platform. The process varies by use case but would look something like this with Pecan’s platform:
- Connect your data set, which could be industry-specific and targeted (similar to the SLM trend discussed above). Pecan accepts data sets as CSV files and also supports a library of integrations.
- Create your predictive question through an interview-like chat with an AI copilot based on your unique requirements. Pecan then generates a SQL-based Predictive Notebook (a data notebook) from your predictive question in minutes.
- Refine your notebook's SQL queries if needed to assemble the right training dataset for your desired model.
- Preview your model. Pecan shows you the final results in an editable form that you can adjust until you get it right, or even extend with detailed attributes and SQL code.
- Train your model and evaluate its results. In practice, these two steps blend into one iterative loop: train, review the output, tweak, and repeat. With Pecan’s intuitive metrics dashboard, you can see the model in action and decide whether additional changes are needed (a simplified sketch of this train-and-evaluate loop follows the list).
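For readers curious about what those last steps automate, here is a generic, simplified sketch of a train-and-evaluate loop in Python. The file name, column names, and model choice are hypothetical placeholders, not Pecan’s actual pipeline:

```python
# A generic, simplified sketch of the train-and-evaluate loop a low-code
# platform automates behind the scenes. The file name, column names, and
# model choice are hypothetical placeholders, not Pecan's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Step 1: connect a data set (a hypothetical customer table).
data = pd.read_csv("customers.csv")  # assumed to hold numeric features plus a "churned" label

# Steps 2-3: the predictive question becomes a labeled training set.
features = data.drop(columns=["churned"])
labels = data["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

# Steps 4-5: train, evaluate, and decide whether more tweaking is needed.
model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```

A low-code platform hides this loop behind a dashboard; the point is that each run produces a metric you can inspect before deciding whether to tweak and retrain.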
Why it matters
Organizations need data to make important business decisions and stay competitive; harnessing it is part of the modern “gold rush,” and not taking part leaves opportunities on the table. With the ability to create your own predictive models, you can remain competitive with other companies in your field even without the next great data scientist hire.
Where it’s going
Not all low-code solutions are created equal, and companies should look for those that prioritize security in addition to user-friendliness. Also, as more no- and low-code solutions enter the market, expect the best ones to shine with intuitive customer dashboards and more of the “why” behind their code suggestions, demystifying the “black box” of AI.
Pecan provides the thinking behind each piece of the code, what it does and why it matters, so you can confidently use its suggestions and demonstrate value to stakeholders.
AI and ML Trend #4: AI for good
The battle against some of the world’s most vexing issues has long relied on high-tech tools (such as innovative energy sources), so it makes sense that AI has been enlisted to do its part in creating a better world. AI for Good, the organization built to recognize and prioritize such goals, currently tracks several categories where AI can lend a hand. These include health, the elimination of poverty, feeding the world, and even gender equity.
Why it matters
This trend isn’t so much the next big thing in AI as it is a natural progression for a tech that has the dual power to be used for great harm or great good. Using AI to create positive change isn’t a controversial aim. However, issues arise when asking, “Whose definition of ‘good’ are we supporting?”
For example, the same technology that powers disease reporting and tracking also presents privacy and piracy issues. Innovators must carefully balance good intentions with the risks of such powerful tools. And every tool built for altruism could be tweaked to have the opposite effect. As leaders choose their consultation teams, they must look extra hard for those who truly have humanity’s best interests in mind.
Where it’s going
With the collection of more data and better predictive analytics tools, we can expect further advancements in health, education, and even human rights. However, these projects must get substantial buy-in and publicity to help counter the negativity created when AI goes awry. As long as the conversation about AI dangers continues to dominate newsfeeds and consciences, AI will have a higher chance of being used for the benefit of humanity.
AI and ML Trend #5: The fusion of GenAI with other types of AI
GenAI is making an appearance in virtually every enterprise application, from Adobe and Canva to Grammarly and beyond. Why? Because it lets users interact with their technology applications in ways that simply weren’t possible before, opening new use cases and possibilities.
Why it matters
GenAI integrations have made it easier than ever to embed these capabilities into existing technology. This is an important change for both businesses and end users.
Businesses will need to be increasingly conscientious about how they use GenAI in their platforms, ensuring there aren’t hallucinations or risks for customers and that GenAI stays within the scope of what it's capable of (for example, answering questions about a data set rather than building a predictive model).
For end users, this presents a massive opportunity to accomplish more with their favorite applications. Over time, end users will continue to get better at prompting ChatGPT integrations and working alongside these text-based interfaces.
Where it’s going
More and more organizations will have generative AI capabilities within their products and solutions. Those most comfortable and savvy with GenAI will reap the most benefits.
Pecan: Trusted, no matter the trends
Trends come and go, and ML will have its share of winners and losers. But no matter what the future holds, the right technology — like Pecan — can help you create, test, and scale independently without waiting for that “just-right” product to launch.
Pecan AI is a low-code predictive analytics platform that helps data professionals and business users of various skill levels build predictive models. With the help of our GenAI copilot, you can ask a few questions to isolate your business problem, choose the right model, and generate an ML model in minutes.
Because we automate the data and machine learning pipeline and many steps of the data science process — including data preparation, feature engineering, and coding — we reduce the time and expertise needed to build and deploy models.
You can experiment and iterate faster and more easily than ever: what would have taken three to six months can be finished in weeks or even hours.
Our easy-to-use dashboards and AI assistant enable everyone on the team, not just your data scientists and IT users, to experiment with and build models. With Pecan, you don’t need to bring on additional machine learning and data engineers — those team members, or rather the functions they perform, are built right in.
Now, you can harness the power of predictive and generative AI in one platform. Pecan makes building machine-learning models as easy as typing. You just need to be able to ask the right questions.
Embrace the latest AI and ML trends in your team — without hiring specialists. Get in touch to find out how Pecan can make ML models accessible to your team with our intuitive, flexible platform and our expert guidance.