Artificial intelligence (AI) has been described as a set of technologies that gives computers vision and lets them understand written and spoken language; as machines capable of performing functions usually associated with human minds; as the simulation of human intelligence; as an unfathomably large potential boon to human productivity; and as the possible doom of humankind.
In short, AI is at the center of several raging business and societal debates. It arrived there slowly, then suddenly. For more than a century, AI had captured the imaginations of a small group of philosophers and science-fiction fans and, for roughly the past 80 years, those of a smaller group of mathematicians and computer scientists. Thanks (mostly) to the efforts of the latter group, AI erupted into the public consciousness during the last six weeks of 2022 and now appears to be on the cusp of achieving its long-promised potential to transform nearly every aspect of human life.
For small and medium-sized businesses (SMBs), however, the questions about AI are quite simple: Can AI boost the productivity of our business? Can we use it to operate more efficiently, to grow faster or to be more profitable? And regardless of whether AI can help us do all that, can we afford it?
This article addresses those practical questions with practical answers. It explains the highlights of AI’s history, what it can do for SMBs, how it works and the business benefits it can bring.
What Is Artificial Intelligence?
Though there is still debate about the nature of AI, it is not controversial to say that AI is a set of technologies and practices meant to create computer-based systems that can perform tasks that, before AI, only humans could do.
What these tasks all have in common is that they require making decisions based on information. Therefore, all AI systems include data and algorithms that process and, sometimes, act on the data. Some AI systems, such as generative AI chatbots, are trained on so much data that it becomes a newsworthy talking point. But the multitude of AI systems that are more narrowly focused on individual business tasks need only enough data to address the job at hand — which may still be a large volume of data by everyday standards.
AI algorithms are so numerous and varied that they are tricky to discuss at a high level. But what they all have in common is the ability to recognize patterns in data, make decisions or predictions based on that information and, in many cases, learn from the quality of their decisions. How each algorithm works is what primarily differentiates the AI technologies and approaches you’ve likely heard of, such as machine learning (ML), deep learning, natural language processing (NLP), computer vision and the neural networks that underlie most of them. These are the technologies that enable machines to understand — and generate — speech, recognize images and make autonomous, problem-solving decisions.
Any discussion about AI systems must also include the human factor. People create and fine-tune the software that embodies the algorithms, design the hardware to run the software, judge the quality of AI systems’ output and, often, provide feedback that helps the system improve.
Weak AI vs. Strong AI
One of the ways that philosophers and computer scientists have long categorized AI is “weak” or “strong.” Strong AI, also called artificial general intelligence (AGI), has been discussed consistently in science fiction and news media, and has garnered the most societal debate since the emergence of ChatGPT in November 2022 — ironic because no instances of AGI are known to exist. Strong AI is still theoretical, with debate in scientific communities about how close to AGI some AI systems may come.
Weak AI, on the other hand, is everywhere. “Weak” refers to the scope of what an AI system can do, not the quality with which it performs its tasks. All AI systems used in businesses today are narrowly focused on specific functions or capabilities, rather than on trying to replicate general human intelligence. Even ChatGPT and similar large language models (LLMs) are considered weak/narrow, despite their wide-ranging knowledge, because they are limited to language. Their abilities cannot be extrapolated to visual understanding, motor control or complex decision-making, and they don’t learn and adapt the way human intelligence can.
Key Takeaways
- AI can enable computers to learn, solve problems and make decisions.
- Integrating AI capabilities into business technology solutions can help companies automate more operations, gain better insights from their data and improve customer experiences.
- The future of AI is being shaped by several advancements and the increasing importance of human-AI collaboration, both of which present opportunities and challenges for businesses.
- As AI moves forward, businesses that effectively harness its capabilities and adapt to the changing landscape will be better positioned to thrive than those that fall behind.
Artificial Intelligence Explained
Why should businesses, particularly SMBs, care about AI? Because a multitude of those weak — i.e., narrowly focused — AI algorithms are rapidly being embedded in, or added to, practical business applications that can help improve efficiency, reduce costs and drive growth. AI algorithms can help SMBs automate repetitive tasks, gain valuable insights from data and enhance customer experiences in ways that, before AI, were open only to bigger companies with deeper pockets.
For example, AI-powered customer service chatbots and virtual assistants can handle basic customer inquiries 24/7, freeing up human staff to focus on more complex issues. They can also learn from customer interactions to provide more personalized and efficient support over time. (It’s worth noting that many of the existing chatbots we have all likely interacted with — and been frustrated by — are rules-based or far simpler AI systems; the new breed of AI-enhanced customer chatbots emerging now is not so limited.)
In marketing and sales, AI algorithms can analyze customer data to identify patterns and preferences, enabling businesses to create targeted marketing campaigns and personalized product recommendations that can lead to higher conversion rates and increased customer loyalty. And that, of course, means more revenue.
Internally, AI can help companies improve automation in data entry, bookkeeping and inventory management tasks, for example, reducing the risk of human error and saving time. It can also assist with financial management and accounting functions, such as demand forecasting, budgeting and planning, and producing financial statements.
How Does AI Work?
SMB owners and managers should understand two important aspects of how AI works. One is how AI works in the big picture — where does its ability to analyze company information and help businesses make better-informed decisions come from? The other is how they are likely to experience AI, in action, in their own organizations.
How AI Works: The Big Picture
AI works by processing data, identifying patterns in the data, using the patterns to make decisions or predictions, getting feedback on the quality of its choices — and iterating that process hundreds, thousands or millions of times. Let’s break that into steps (a brief code sketch after the list illustrates the full cycle):
- Data input: AI systems generally need training or a combination of careful instruction and data access before they can be released into the world. Either way, that means large datasets. The data can come from a combination of different sources, such as Internet-connected sensors, company databases and user interactions. And it can have multiple modalities, such as text, images, audio and video.
- Initial processing: Whatever its type, real-world data is usually messy. It must be preprocessed to remove irrelevant and redundant information and transformed into a format that the AI system can understand and analyze. This usually involves techniques like data cleaning and normalization.
- Algorithm writing/selection: The heart of any AI system is its algorithms — the mathematical models and instructions that tell the system how to process and learn from the data. Historically, there have been many different types of AI algorithms, but today most AI systems use one or more of the various approaches to ML, including deep learning and neural networks. ML algorithms also include ones that train the AI system (discussed later in this article). Each approach — and, of course, each individual algorithm — has its own strengths and weaknesses that can make it better or worse at any given task.
- Training: This involves feeding the data into the AI system (which may include dozens of algorithms) and allowing it to learn and adjust its internal parameters to better fit the patterns and relationships within the data.
- Testing and validation: After training, the AI system is tested on data it has never seen before to evaluate its performance and accuracy. This helps ensure that the system has not simply memorized the training data but can generalize to new situations. If the system’s performance is not satisfactory, it may need to be retrained with more data or its algorithm may need to be revised.
- Deployment: The AI system can now be deployed into a production environment to make predictions or decisions based on real-world data.
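To make these steps concrete, here is a minimal sketch of the full cycle in Python using scikit-learn. The dataset is synthetic and the model choice (a random forest) is an illustrative assumption, not a recommendation; a real project would substitute its own data, preprocessing and algorithms.

```python
# A minimal, illustrative sketch of the data -> preprocess -> train -> test -> deploy
# cycle described in the list above, using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Data input: a synthetic stand-in for real business data
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# 2. Initial processing: hold out test data, then normalize the features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# 3-4. Algorithm selection and training
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# 5. Testing and validation on data the model has never seen
print("Accuracy on held-out data:", accuracy_score(y_test, model.predict(X_test)))

# 6. "Deployment": score a new, incoming record (here, simply the first test row)
print("Prediction for a new record:", model.predict(X_test[:1]))
```

The pattern of fitting the preprocessing and the model on training data, evaluating on held-out data and then scoring new records mirrors the training, validation and deployment steps above.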
Throughout the entire process, human oversight and intervention are crucial. Data scientists and AI experts are involved in selecting and preparing the data, choosing the appropriate algorithms and fine-tuning the system’s performance. They also monitor the system’s outputs for accuracy.
Bear in mind that all of this is still rapidly evolving. As AI technologies continue to advance, new approaches and architectures are emerging that can handle more complex tasks and larger datasets. For example, the AI “transformer” architecture was introduced in a June 2017 academic paper and, based on that, the generative pretrained transformer (GPT) model was first described a year later in a June 2018 paper. Since then, GPTs have pushed the boundaries of what AI can do in terms of understanding and generating language, making ChatGPT possible only four years and five months later, in November 2022.
Despite these ongoing advancements, AI systems are still limited by the quality of the data they are trained on, the quality of the data they are given to act on in business applications and the potential inherent biases and assumptions built into data and algorithms.
How AI Works for SMBs
In practice, SMBs experience AI through the enhanced capabilities and efficiencies it brings to their everyday tools and processes. AI algorithms are usually integrated into various business applications that organizations are familiar with and, often, already using. Standalone AI software, such as ChatGPT and the similar models that have followed it, is still rare.
For example, AI may already be integrated in relatively subtle ways into software-as-a-service (SaaS) applications, such as customer relationship management (CRM), marketing automation and accounting software, since many SaaS providers have begun to incorporate AI capabilities. A CRM system might use AI algorithms to analyze customer data and provide personalized recommendations for sales and marketing strategies. A business manager simply interacts with the CRM interface, while the AI works in the background to process data and generate insights.
Other products that SMBs may use, or may want to consider using, make more ambitious use of AI capabilities, including recent advances in generative AI. Companies are still figuring out how to incorporate these capabilities, so many different approaches are emerging.
What Can AI Do?
People tend to anthropomorphize — that is, project human attributes onto — just about everything, including AI. So, many people think of AI as “thinking.” But it does no such thing. What does AI actually do? Here are eight key features, which can be incorporated into business applications in ways that lead to potentially significant benefits — and, sometimes, to the uncanny sense that the software can actually think.
- Analyze data: If data is the new oil, then AI can be a major refinery. AI algorithms can process and analyze vast amounts of structured and unstructured data much faster and more accurately than humans. This enables them to discover hidden patterns and trends that can inform business intelligence and company decision-making processes. AI-powered data analysis can be applied to virtually any field.
- Automate processes: From data entry and document processing to inventory management, warehousing and accounting, AI can automate repetitive and time-consuming tasks, freeing up human resources to focus on work that adds significant business value. AI-powered automation can improve efficiency, accuracy and productivity across various industries.
- Detect objects and patterns: Humans are excellent pattern-recognizers, but AI can do it with far more data than we can keep in our heads — and many times faster. AI algorithms can identify and recognize objects, patterns and anomalies across visual, textual and numerical data. AI-powered computer-vision and image-recognition technologies can accurately identify and classify objects, faces and patterns in images and videos, which is why they are so useful in security and surveillance, medical imaging and autonomous vehicle applications. Beyond visual data, AI algorithms can be used in business analytics to detect patterns and anomalies in large datasets. They can help identify fraudulent transactions in financial data, recognize spam or malicious content in emails, and discover trends in customer-behavior data.
- Personalize recommendations: AI can analyze user behavior and/or purchase data and provide personalized recommendations and experiences. This capability is widely used in ecommerce, streaming services and content platforms to improve user engagement and satisfaction. (A toy sketch of this idea appears after this list.)
- Translate languages: AI-powered language translation tools can instantly translate text and speech from one language to another, with increasing accuracy. This can help managers and executives communicate across borders and cultures, whether they are traveling abroad or doing business with international partners.
- Generate text and images: This generative AI capability is what took the world by storm in late 2022, when AI tools arrived that can generate human-like text and realistic images based on written prompts and examples. From art and design to content creation and beyond, some business analysts believe generative AI has the potential to add trillions of dollars to the global economy, annually, by increasing knowledge worker productivity. Tools like ChatGPT and Claude for text generation, Perplexity for research and Midjourney and DALL-E for image generation have shown the potential of generative AI.
- Summarize data and text: AI can automatically summarize large volumes of data and text, extracting key points and insights. In a world drowning in information, this capability can be a lifeline. It makes it far easier for researchers and business managers, for example, to cut through the details of large documents or datasets and get to the heart of what matters to their organizations.
- Converse in natural language: AI algorithms can enable computer systems to understand and respond to human language, whether spoken or typed. This capability lies at the core of tools like AI-powered chatbots and virtual assistants. But, over time, all kinds of information technology systems can benefit from integrated algorithms that let people interact with them in natural language.
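To ground one of these capabilities, here is a toy sketch of the “personalize recommendations” idea referenced above: compare customers’ purchase histories with cosine similarity and suggest items that a similar customer bought. The product names and purchase counts are invented; production recommender systems rely on far richer data and models.

```python
# A toy "customers like you also bought" recommender, using cosine similarity
# over made-up purchase histories.
import numpy as np

products = ["laptop", "mouse", "keyboard", "monitor", "webcam"]
# Rows = customers, columns = quantity of each product purchased (made-up data)
purchases = np.array([
    [1, 1, 1, 0, 0],  # customer 0
    [1, 1, 0, 1, 0],  # customer 1
    [0, 0, 1, 0, 1],  # customer 2
])

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 0  # recommend products for customer 0
others = [i for i in range(len(purchases)) if i != target]
scores = [cosine_similarity(purchases[target], purchases[i]) for i in others]
most_similar = others[int(np.argmax(scores))]

# Suggest items the most similar customer bought that the target customer has not
suggestions = [name for name, mine, theirs in
               zip(products, purchases[target], purchases[most_similar])
               if theirs and not mine]
print(f"Recommend to customer {target}:", suggestions)
```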
Types of AI
There are many ways to categorize AI systems. The weak versus strong AI discussion above is one. But AI systems can also be differentiated by training method, capability level and the underlying algorithm’s core approach. Complicating the matter is that over the course of an 80-year history, various avenues of AI research appeared to reach dead ends, only to be “rediscovered” years, or even decades, later — often with different names.
Here is a useful way to categorize AI systems based on the types widely in discussion now (a short code sketch after the list contrasts two of these approaches):
- Decision trees are a prime example of a pre-deep-learning/neural network approach to AI algorithms. Tree models make decisions based on a series of questions; so-called “random forests” are collections of decision trees working together that are more accurate than individual trees. Symbolic regression, genetic algorithms and Bayesian networks are other examples of different algorithmic approaches to the challenge of building machines that learn. These AI approaches, and many more, continue to operate in products today.
- Machine learning is a subset of AI at the same level as all the approaches in the first category, but it’s the big one — at least for now. ML encompasses a multitude of algorithms and statistical models that enable computers to improve their performance on a task through experience, without being explicitly programmed. The next two categories, below, are specialized applications of ML. And so are many AI systems that are trained to perform narrow, well-defined functions using large, structured datasets (i.e., data that is labeled and usually organized in rows and columns). ML techniques are used in most modern AI systems.
- Deep learning is a subset of ML that takes advantage of neural network architectures with more than two “hidden” layers of artificial neurons. Depending on the complexity of the use case, deep learning neural networks can have up to hundreds of layers. Although AI systems using pre-deep-learning approaches, such as decision trees, could perform image recognition and process natural language, newer AI systems using deep-learning ML techniques and running on neural network architectures outperform them. Self-driving cars, for example, are a use case that became possible only through deep learning.
- Generative AI uses a subset of deep-learning ML technologies and deep neural networks to construct AI models that, once trained, can rapidly create content in response to text prompts. Different generative AI tools can produce new audio, image and video content, but it is the text-oriented conversational AI of LLMs like ChatGPT that has generated the most excitement. Generative AI models represent a significant advance in AI because they exhibit many AI capabilities that begin to bridge the gap between weak and strong AI systems, including natural language understanding and generation, knowledge synthesis, problem-solving across multiple domains of expertise and complex reasoning. Consequently, people can converse with, and learn from, advanced generative AI models in pretty much the same way they do with humans.
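For a feel of how these categories differ in practice, the hedged sketch below trains a decision tree (a pre-deep-learning approach) and a small neural network on the same synthetic, nonlinear task using scikit-learn. The dataset and settings are arbitrary, so the scores are illustrative only.

```python
# A rough comparison of a decision tree and a small neural network on the
# same synthetic, nonlinear classification task.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("Neural network accuracy:", net.score(X_test, y_test))
```

On this kind of nonlinear data, the multilayer network typically edges out the shallow tree, echoing the point above about deep learning outperforming older approaches on complex patterns.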
Approaches to AI Model Training
SMBs are unlikely to work directly on training AI models. But it is nonetheless important to understand AI training because, without it, an AI system would be virtually useless. Training is where AI models learn to perform specific tasks by absorbing information from examples — that is, data, and lots of it. The quality and quantity of data used in training, as well as the choice of training approach — different training methods are suited to different types of AI models, tasks and data — significantly affect the resulting AI system’s performance. For these reasons, selecting the appropriate training approach is key to building effective AI applications.
But why choose only one? Most practical applications of ML tend to use a combination of techniques rather than relying on a single approach. There are four main approaches to training AI models (a brief sketch after the list contrasts the first two):
- Supervised learning: In this method, AI systems are trained using labeled data, meaning that both input data and the correct output are provided. The AI learns to map inputs to outputs based on the examples it is shown. Supervised learning is commonly used for tasks such as image classification, sentiment analysis and predictive modeling, where a target variable or outcome to predict is clear.
- Unsupervised learning: AI systems trained with unsupervised learning are given unlabeled data and must identify patterns, structures or relationships on their own. The AI learns to group similar data points together or detect anomalies without being explicitly told what to look for. This approach is often used for tasks like customer segmentation, anomaly detection and data compression, where the goal is to discover hidden data patterns or insights.
- Reinforcement learning: In reinforcement learning, AI systems learn through trial and error, receiving “rewards” or “punishments” based on their actions. The AI learns to make decisions that maximize its cumulative reward over time. One technique, known as reinforcement learning from human feedback (RLHF), has recently come to public prominence because it played a crucial role in the development of the GPT family of models behind ChatGPT. In RLHF, human feedback helps the AI create a reward model that represents human preferences and values, which then guides the model’s outputs when it is put to work on its assigned task(s). Reinforcement-learning techniques are commonly used in game-playing AI and robotics applications; recommendation systems, such as those used by Netflix and Amazon; and certain self-driving car technologies.
- Semi-supervised learning: This is a hybrid that combines elements of supervised and unsupervised learning. In this technique, AI systems are trained using a small amount of labeled data along with a larger amount of unlabeled data. The AI learns to generalize from the labeled examples and to leverage the structure in the unlabeled data to improve its performance. This is particularly useful when labeled data is scarce or expensive to obtain, as it allows the AI to learn from a combination of labeled and unlabeled examples. It’s usually used in combination with one or more of the other training models for applications in robotics, text and image classification, recommendation systems, autonomous vehicles and more.
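The difference between the first two approaches is easiest to see in code. This brief sketch uses scikit-learn’s bundled iris dataset: the supervised model is handed the correct labels, while the unsupervised one must group similar rows on its own. The model choices are arbitrary and purely illustrative.

```python
# Supervised vs. unsupervised training on the same small dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the correct labels (y) are provided during training
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised prediction for the first flower:", clf.predict(X[:1]))

# Unsupervised: no labels; the algorithm groups similar rows on its own
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster assignments (first 10 rows):", clusters[:10])
```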
Benefits of AI
The list of benefits that AI systems can bring to a business is long, varied and still growing. Keep in mind that these advantages stem directly from AI’s ability to recognize patterns, make decisions or predictions, and learn by reviewing its own performance, all of which humans also do, sometimes better. But because of their computational power, AI systems can reach these conclusions much faster than humans and do so while analyzing many times more data.
- Improves accuracy: AI systems process and analyze vast amounts of data with a high degree of precision, reducing the risk of errors and inconsistencies. For example, AI-powered medical tools can analyze patient data and imaging results to provide more accurate diagnoses, while AI-based fraud detection systems can identify suspicious transactions with greater reliability than human analysts.
- Increases efficiency: By automating repetitive and time-consuming tasks, AI can help businesses and individuals work more efficiently and productively. For instance, AI-based document processing tools can extract relevant information from large volumes of text, saving time and effort.
- Enhances decision-making: AI’s ability to analyze many times more data than people can leads to better-informed decisions. AI systems, particularly in fields like healthcare, finance and logistics, are assisting in decision-making through advanced predictive modeling and data analysis.
- Provides high availability and scalability: These generally are benefits of the underlying IT infrastructure that supports an AI system rather than of the AI itself. Nonetheless, because AI systems for SMBs are almost always implemented as cloud-based software, it is the case that they can operate 24/7 and are always available when needed. This is particularly valuable in industries such as healthcare, where AI embedded in monitoring systems can continuously track patient vital signs and alert medical staff to potential issues, or in customer service, where AI chatbots can provide around-the-clock assistance. On the scalability side, the same cloud infrastructure makes AI systems easy to scale up or down to accommodate changing demands and workloads.
- Personalizes results: AI algorithms can analyze customer data and preferences to provide highly personalized experiences and recommendations. This is evident in applications such as streaming services, where AI algorithms suggest content based on a user’s viewing history, or in ecommerce, where AI-powered product recommendations are tailored to individual shopping behaviors and interests.
- Reduces repetitive tasks: Business applications with embedded AI capabilities can automate mundane and repetitive tasks, allowing humans to focus on more creative or strategic value-added activities. AI-capable apps can handle data entry, invoicing and other routine administrative tasks, while content moderation systems with AI can automatically flag and/or remove inappropriate material from online platforms.
- Converses with humans: NLP capabilities can enable any business system to engage in natural, conversational interactions with workers, providing information, assistance and support. This can improve accessibility and make new forms of human-machine collaboration possible. For example, workers in assembly plants can already get real-time guidance from AI systems via augmented reality headsets, and AI-powered virtual health assistants are interacting with patients via voice and text messages to help them stay on track with their medication regimens.
- Creates computer code: Generative AI systems can assist in writing and optimizing software code, boosting developer productivity and reducing errors. AI-powered code completion tools can suggest relevant code snippets and functions as developers type, and AI-based code optimization systems can automatically refactor and streamline existing codebases.
- Accelerates innovation: AI can speed up research and development processes, leading to faster innovation. Because they can process large volumes of scientific data rapidly, AI systems are accelerating discovery in fields from pharmaceuticals and material science to astrophysics. Similarly, AI can accelerate creative processes by collaborating with human workers to generate ideas, designs, music, art and even literature.
- Refines risk mitigation: AI’s predictive capabilities can help identify and mitigate potential risks before they become problems. For example, an AI system could analyze data related to weather, geopolitics and transportation routes to predict potential supply chain disruptions. Business managers could identify alternative suppliers, routes, etc., in advance.
- Optimizes predictive maintenance: In industries that use heavy machinery, such as manufacturing, AI can predict when machinery or equipment will fail or require maintenance, helping to prevent breakdowns before they happen. This reduces downtime and improves operational efficiency.
- Improves accessibility: AI can make services and information more accessible to people with disabilities. A website’s map of bus routes, for instance, could contain AI-generated text aligned with the images that describes the routes in detail. That could be paired with NLP to read the route descriptions out loud for visually impaired people.
- Improves healthcare outcomes: In medicine, AI can lead to earlier disease detection and more personalized treatment plans.
Examples of AI Technologies
AI research and evolution have produced many different, specialized technologies, each with its own distinct applications and potential to reshape industries. The following seven AI technologies are some of the most impactful and widely adopted.
Computer Vision
Computer vision enables machines to understand visual information from the world around them. It is used in facial recognition, object detection and image classification applications. In retail stores, for example, it’s used in automated checkout systems and inventory management (say, by deducting an item from inventory as it is purchased). It’s also a crucial component for autonomous vehicles. In security, it powers surveillance systems; in healthcare, it aids in medical imaging analysis and disease diagnosis.
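As a rough illustration, the sketch below classifies a single photo with a pretrained image-recognition model from the torchvision library. The file name is hypothetical and the model choice is an assumption; in practice, an SMB would more likely consume this capability through a vendor’s application or API than run a model directly.

```python
# A hedged sketch of image classification with a pretrained torchvision model.
# Assumes torch/torchvision are installed; "shelf_photo.jpg" is a hypothetical image.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()      # resize, crop and normalize for this model
img = Image.open("shelf_photo.jpg")    # hypothetical retail shelf photo
batch = preprocess(img).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print("Predicted label:", weights.meta["categories"][top_class.item()])
print("Confidence:", round(top_prob.item(), 3))
```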
Weather Modeling
AI algorithms analyze vast amounts of meteorological data, including satellite imagery, radar and historical weather patterns, to generate accurate and detailed weather forecasts. Beyond meteorologists, weather models are used by farmers to optimize crop planting and harvesting, by utilities to anticipate energy demand and by emergency services organizations to better prepare for and respond to severe weather events. Some large businesses incorporate weather data into their demand forecasting analyses, a practice likely to become available to SMBs, too, as the technology becomes less expensive and easier to use.
Autonomous Vehicles
Self-driving cars, trucks, drones and other autonomous vehicles rely on a combination of AI technologies, including computer vision, sensor fusion (integrating data from multiple sensors to create a more comprehensive and accurate understanding of the surrounding environment) and decision-making algorithms, to navigate roads (and skies) safely without human intervention. This technology is being developed and tested by automotive manufacturers, technology companies and transportation services. The potential benefits include reduced traffic accidents, increased mobility for elderly and disabled individuals and better traffic flow in cities.
Fraud Detection
In fraud detection systems, AI algorithms analyze patterns of user behavior to find anomalies that suggest fraudulent activities and then try to prevent the associated action, either on their own or by alerting human agents, depending on the situation. This technology is widely used in banking, insurance and ecommerce. It helps financial institutions protect customers’ assets, reduces business losses due to fraud and helps organizations comply with industry regulations. In ecommerce, AI-powered fraud detection can reduce chargebacks and enhance trust in online transactions.
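Here is a simplified sketch of the idea, using scikit-learn’s IsolationForest to flag unusually large transaction amounts in synthetic data. Real fraud systems analyze many behavioral signals, not just amounts, and tune their thresholds carefully.

```python
# Illustrative anomaly detection on synthetic transaction amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=15, size=(500, 1))   # typical purchase amounts
suspicious = np.array([[900.0], [1250.0]])             # unusually large ones
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)                 # -1 = anomaly, 1 = normal
print("Flagged amounts:", transactions[flags == -1].ravel())
```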
Speech Recognition
Speech recognition technology uses AI algorithms to convert spoken language into written text. It’s often included as an initial step in an NLP application but shouldn’t be confused with that broader technology (see below). To avoid such confusion, it’s often called speech-to-text technology. By either name, it is used in applications like dictation, virtual assistants and customer service automation. In telecommunications, it powers voice-activated dialing and customer support. In healthcare, it enables voice-to-text transcription of medical notes and hands-free documentation. In automobiles, speech recognition is a part of voice-controlled navigation and entertainment systems.
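For illustration, the sketch below transcribes a hypothetical WAV file with the open-source SpeechRecognition package, which in this configuration delegates recognition to Google’s free web API; commercial deployments typically use dedicated speech-to-text services instead.

```python
# A hedged speech-to-text sketch using the SpeechRecognition package
# (pip install SpeechRecognition). "meeting_clip.wav" is a hypothetical file,
# and recognize_google() requires an internet connection.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting_clip.wav") as source:
    audio = recognizer.record(source)  # read the entire audio file

try:
    print("Transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio.")
```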
Natural Language Processing
NLP is an AI technology that enables computers to understand, interpret and generate human language. While speech recognition focuses on the acoustic-to-text conversion, NLP is meant to achieve — and output — a deeper understanding of language content, regardless of whether it originated as speech or text. It’s used in applications such as sentiment analysis, text summarization and automated translation from one language to another. In marketing, NLP helps companies analyze customer feedback and social media mentions. In media, NLP assists in content recommendation systems and automated news aggregation. In finance, it powers the analysis of financial reports and market sentiment for investment decisions.
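A brief sketch of one common NLP task, sentiment analysis, using the Hugging Face transformers library. It assumes the library is installed; the default model is downloaded automatically on first use, and the sample reviews are invented.

```python
# Sentiment analysis of customer feedback with a pretrained transformers pipeline.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
reviews = [
    "The new billing portal is fantastic and saved me hours.",
    "Support never called me back and the invoice was wrong again.",
]
for review, result in zip(reviews, sentiment(reviews)):
    print(f"{result['label']} ({result['score']:.2f}): {review}")
```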
Virtual Assistants
Virtual assistants are AI-powered software agents that can understand natural language commands and perform tasks on behalf of users. They are fairly sophisticated, able to handle complex, multistep tasks that require contextual understanding, and they can learn from their interactions to improve future performance. They are used in smartphones, smart home devices and enterprise software. In healthcare, for example, they can engage in natural conversations with patients, collect symptom information and assist doctors in making diagnoses. In education, virtual assistants can offer personalized learning experiences and answer student queries. In corporate settings, they can manage schedules, set reminders and control conference systems. Virtual assistants can even personalize their interactions, adapting to individual user preferences over time.
Chatbots
Think of chatbots as virtual assistants’ younger cousins. They are simpler AI-based conversational interfaces that can interact with users through text or voice, answering easy questions, providing information and completing tasks. They excel in scenarios with predictable patterns of customer interaction and repetitive tasks. In customer service, for example, chatbots handle routine inquiries, such as checking the status of an order or explaining the differences among standard service plan offerings. In ecommerce, they can guide customers through the purchasing process and present personalized recommendations (that were determined by a separate AI system). Banking chatbots assist with account inquiries and transaction details. Telecommunications companies employ chatbots for troubleshooting common issues, account management and service inquiries.
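The sketch below shows the simplest possible rules-flavored chatbot for routine inquiries of the kind described above. It is deliberately minimal: modern AI chatbots replace the keyword table with a language model, but the request-and-response loop looks much the same.

```python
# A deliberately simple keyword-matching chatbot for routine customer inquiries.
RESPONSES = {
    "order": "You can check order status under Account > Orders.",
    "hours": "Our support team is available 24/7 via chat.",
    "refund": "Refunds are processed within 5-7 business days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("Where is my order?"))
print(reply("Do you offer refunds?"))
```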
Generative AI
Generative AI is a groundbreaking subset of AI that can create new, original content rather than only analyze or act on existing data. Unlike other AI technologies designed for specific tasks like image recognition, language translation or decision-making, generative AI can produce virtually any kind of text, image, music, speech or other content that resembles what a human might create, all in response to text prompts.
At the heart of generative AI are LLMs and deep-learning algorithms that do the heavy lifting of enabling machines to understand and generate human-like content. Generative AI models are trained on vast amounts of data so that they can learn the patterns, styles and structures of different types of content and then use that knowledge to generate new, coherent outputs based on user prompts or other input parameters.
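As an example of prompting a generative model from code, here is a hedged sketch using the OpenAI Python SDK (version 1 or later). The model name is an assumption and provider APIs change frequently, so treat this as a pattern rather than a fixed recipe; it also assumes an OPENAI_API_KEY environment variable is set.

```python
# A hedged sketch of asking a hosted LLM to draft marketing copy.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your plan offers
    messages=[
        {"role": "system", "content": "You write concise marketing copy for an SMB."},
        {"role": "user", "content": "Draft a two-sentence product update email about faster shipping."},
    ],
)
print(response.choices[0].message.content)
```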
One of the most promising aspects of generative AI is its potential to enhance human productivity and creativity through collaboration, which, according to McKinsey & Company, could add trillions of dollars to the global economy annually in the coming years. Rather than replacing human workers, generative AI can serve as a powerful tool to augment and accelerate human capabilities. For example, writers can use generative AI to brainstorm ideas, overcome writer’s block or produce drafts that they can then refine and edit. Designers can leverage generative AI to create multiple variations of a design concept, explore new styles or automate repetitive tasks. Researchers can employ generative AI to summarize large volumes of text, generate hypotheses or identify patterns in complex datasets.
Generative AI technologies like GPT-3.5 (and its successors) for text generation, DALL-E for image creation and WaveNet for audio synthesis have sparked excitement across multiple industries. As these technologies continue to advance and become more accessible, they have the potential to transform the way people work, learn and create. At the same time, however, generative AI has raised questions about intellectual property rights, the role of human creativity and the potential for misuse.
AI Use Cases
AI technologies are being applied across a wide range of industries, sometimes simply enhancing the efficiency of existing operations and other times transforming how businesses operate and the way they deliver value to customers. Here are some of the most prominent AI use cases in eight important industries.
Retail
Ever since Amazon pioneered the use of AI-powered recommendation engines roughly 20 years ago to analyze customer data and provide personalized product suggestions, their use has become widespread in the retail industry. The recommendations that come from AI analysis of customers’ purchase history and browsing behavior can help increase not only sales but also customer engagement. AI algorithms can also predict future demand for products by analyzing historical sales data, weather patterns and other relevant factors, helping retailers optimize inventory management and reduce waste. Additionally, AI-powered computer-vision technologies enable automated checkout systems that allow customers to shop without waiting in line (theoretically), while also reducing labor costs for retailers.
Healthcare
In healthcare, AI systems are powering faster and more accurate diagnoses and treatments. AI algorithms can analyze medical images, such as X-rays and MRIs, to detect abnormalities and assist clinicians in diagnosing diseases. AI can also accelerate new drug discovery by analyzing biomedical data, identifying potential drug candidates and predicting the efficacy and safety of new medications — all of which should reduce the time and cost for pharmaceutical companies to bring new drugs to market. Furthermore, AI can analyze patient data, including genetic information and medical history, to develop personalized treatment plans that can lead to improved patient outcomes.
Finance
In the finance sector, AI-powered systems analyze monetary transactions in real time, identifying patterns and anomalies that suggest fraudulent activity and, thus, helping financial institutions prevent losses and protect customers. AI algorithms can also analyze market data, news sentiment and other factors to make split-second trading decisions, optimizing portfolio performance and reducing risk. Likewise, AI is able to analyze alternative data sources, such as social media activity and mobile phone usage, to assess credit applicants’ creditworthiness. Again, this reduces risk for the lenders, but, in this case, it also broadens accessibility to credit markets for people with limited credit histories.
Logistics
AI is helping the logistics industry improve efficiency and reduce costs. AI algorithms can analyze traffic patterns, weather conditions and other factors to refine delivery routes, reducing fuel consumption and improving on-time performance. AI can also analyze sensor data from vehicles and equipment to predict when maintenance is needed, reducing downtime and extending asset life. Furthermore, AI-powered robotics and computer-vision technologies can automate warehouse operations, such as picking and packing, improving efficiency and accuracy while reducing labor costs.
Media
Media companies are using AI algorithms to analyze user preferences and engagement data and provide personalized content recommendations. This approach has been shown to increase customer retention. AI-powered tools are also assisting news organizations in generating articles, summaries and even videos, enabling them to scale content production and reach new audiences. As it does for other industries, AI can analyze customer data to identify distinct audience segments, which is beneficial for targeted advertising and personalized content delivery.
Cybersecurity
AI is playing an increasingly important role in cybersecurity by enabling faster threat detection and response. AI algorithms can analyze network traffic and system logs to identify potential security threats, so that cyber defenses — whether also automated or initiated by human analysts — can respond faster, reducing the risk of a successful data breach. They’re especially useful when it comes to so-called zero-day threats, which, by definition, have never been seen before. AI can also learn normal user behavior patterns and then detect deviations from those norms, which may indicate insider threats or compromised accounts. Furthermore, AI can prioritize and automate the deployment of security patches based on their vulnerability level and potential impact on the business, reducing the window of exposure to the most dangerous cyber threats.
Manufacturing
For manufacturers, AI is driving improvements in efficiency, quality and productivity. AI algorithms can analyze sensor data from production equipment to predict when maintenance is needed, reducing unplanned downtime and improving overall equipment effectiveness. AI-powered computer-vision systems can inspect products for defects, improving quality control and reducing the need for manual inspections. AI also can analyze historical production data, sales trends and external factors to predict future demand for products, helping manufacturers strengthen production planning and inventory management.
Energy
AI is helping the energy sector optimize operations, reduce costs and improve sustainability. AI can analyze sensor data from power-generation equipment to predict when maintenance is needed, reducing downtime and extending asset life. AI algorithms can also analyze historical consumption data, weather patterns and other relevant factors to predict future energy demand, so that utilities can prepare for the necessary power generation and distribution. Moreover, AI can analyze real-time data from smart meters and other Internet of Things devices to optimize power flow and reduce transmission losses, improving grid efficiency and reliability.
Key Dates in AI Development
Across a half century from the 1950s to the early 2000s, AI made slow and uneven progress. The AI community often made great promises but was unable to deliver, leading to cynicism about the technology. And apparent breakthroughs, like human-competitive chess-playing programs, didn’t generalize well to practical business problems. That said, by 2000 AI had practical value in a wide range of applications. Mostly, these involved training a computer program on a fairly narrow task using what is now considered medium-sized data, such as 10,000 to 100,000 examples. This level of AI is built into many products already in widespread use.
Starting in the early 2000s, one specific approach to AI began to advance rapidly: neural networks. These are multilayered networks of artificial neurons encoded in software. To envision them, imagine the familiar spreadsheet, but in three dimensions, because the artificial neurons are stacked in layers, loosely mimicking the layered organization of real neurons in the brain. Neural networks also mimic the way that the connections between brain neurons have different strengths. They are a very old technology, dating back to the 1950s; during their rapid progress in the 2000s, some of their practitioners rebranded them as “deep learning.”
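To make the picture concrete, here is a toy single artificial neuron in Python: it multiplies each input by a connection weight (the “strength”), sums the results and squashes the total with a sigmoid activation. The weights here are hand-picked for illustration; real networks stack huge numbers of these neurons and learn the weights during training.

```python
# A toy artificial neuron: weighted sum of inputs passed through a sigmoid.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "activation" between 0 and 1

# Example: three input signals with hand-picked, illustrative weights
print(neuron(inputs=[0.5, 0.2, 0.9], weights=[1.2, -0.7, 0.4], bias=0.1))
```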
Here are the key dates that tell the story of how AI emerged into the world:
- 1943, Artificial neuron: Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity,” introducing the concept of artificial neurons.
- 1950, Turing Test: In his seminal paper “Computing Machinery and Intelligence,” Alan Turing proposes “the imitation game,” which later became known as the Turing Test, as a way to determine whether a machine can think. In the game, a human tries to distinguish between a computer and a human based solely on their responses to questions.
- 1955, AI coined: The term “artificial intelligence” is first used in the title of a grant proposal to support the Dartmouth Summer Research Project on Artificial Intelligence, which took place the following year. The proposal was written by four people who would become legends in computer science and AI: John McCarthy, then an assistant professor of mathematics at Dartmouth; Marvin Minsky, a Harvard math professor; Nathaniel Rochester, head of information research at IBM; and Claude Shannon, the Bell Labs mathematician who founded information theory. The event marked the birth of AI as a field of study and came to be known as “the Dartmouth Conference.”
- 1958, Perceptron: Cornell Aeronautical Laboratory research psychologist Frank Rosenblatt develops the Perceptron, an early artificial neural network capable of learning and recognizing simple patterns. It had a single hidden layer between its input and output layers.
- 1959, MIT AI Project: McCarthy and Minsky, who both moved to the Massachusetts Institute of Technology the prior year, co-found the MIT Artificial Intelligence Project.
- 1960, LISP: McCarthy publishes his design of LISP in the April edition of Communications of the ACM (the Association for Computing Machinery). It quickly becomes the most popular programming language for AI research and applications.
- 1962, Backpropagation: Rosenblatt introduces the concept of “back-propagating error correction.” Decades later, the concept becomes crucial to ML and generative AI.
- 1963, Stanford Artificial Intelligence Lab (SAIL): McCarthy, who moved to Stanford University in 1962, founds SAIL; it is still in operation. Today, MIT and Stanford remain among the top U.S. universities for AI research.
- 1966, ELIZA: MIT computer science professor Joseph Weizenbaum debuts ELIZA, considered to be the first chatbot, which simulates conversations a person might have with a psychotherapist. Although it sounds impressively human, ELIZA is entirely rules-based; it parses a user’s input into keywords and then chooses a matching response from a preprogrammed library.
- 1969, Neural nets fall from favor: Minsky, along with Seymour Papert, by then co-directors of the MIT Artificial Intelligence Laboratory (a successor to MIT’s AI “Project”), publish Perceptrons. The authors argue that neural networks like the Perceptron are a dead end and that the future of AI lies in symbolic systems. This fuels a long-running controversy among AI researchers and discourages research in neural nets.
- 1970, Backpropagation rediscovered: Finnish mathematician and computer scientist Seppo Linnainmaa reintroduces the AI research community to the idea of backpropagation, which he describes as the “reverse mode of automatic differentiation” in his master’s thesis.
- 1974, Expert systems emerge: Stanford publishes a paper on Mycin, one of the earliest expert AI systems, which encodes physicians’ knowledge about antimicrobial therapies and recommends treatment for infectious diseases.
- 1978–1986, XCON savings boost expert systems: XCON (for eXpert CONfigurer), another early AI expert system, is written in 1978 by Carnegie Mellon University professor John P. McDermott. Its goal is to help Digital Equipment Corporation (DEC) configure its VAX computers for customer orders, which was a tricky challenge in those days. DEC began using the program internally in 1980; by 1986 it was estimated to be saving the company $25 million ($72 million in 2024 dollars) annually, mainly from reduced configuration errors. XCON’s success was one reason expert systems came to dominate AI research in the 1980s.
- 1986, Backpropagation brings back neural nets: The article “Learning representations by back-propagating errors,” by David Rumelhart, Geoffrey Hinton and Ronald Williams, is published in Nature, applying backpropagation algorithms to multilayer neural networks. Over the next decades, the combination of backpropagation and neural nets becomes the basis of machine learning, leading to multiple AI breakthroughs — including generative AI.
- 1980s, RNNs: Recurrent neural networks — a neural net architecture that incorporates feedback mechanisms like those thought to occur in the human brain — emerge. Though related ideas had circulated earlier, the 1980s brought working RNN models that are still in use.
- 1988, Statistical language translation: IBM researchers introduce statistical machine translation (SMT). Based on information theory and using Bayes’ Theorem, SMT makes a major improvement over the rules-based language translators of the time. Although it did not use neural nets, SMT’s use of large datasets and probabilistic algorithms laid a foundation for those techniques to reemerge in modern language translation systems that are built on neural nets.
- 1990s, CNNs: Convolutional neural networks, a neural net architecture designed for grid-like data such as images, mature into practical systems during the decade, most famously Yann LeCun’s LeNet for recognizing handwritten digits. Because they excel at spatial data representations, CNNs remain central to image recognition and image generation; today’s popular text-to-image generative AI apps use CNN-based components as one of multiple neural net models.
- 1997, Deep Blue beats Kasparov: IBM’s Deep Blue chess-playing computer defeats world champion Garry Kasparov in a six-game match, marking a significant milestone in AI’s ability to compete with humans in complex tasks.
- 2005, DARPA Grand Challenge: Stanley, an autonomous vehicle developed by Stanford University’s Racing Team, wins the DARPA Grand Challenge, successfully navigating a 132-mile desert course without human assistance.
- 2011, Watson Wins Jeopardy!: IBM’s Watson defeats two human champions in a televised game of Jeopardy!. Although Watson did not use deep learning and neural networks — it had a one-off software architecture specialized for question-answering — it showcased the power of NLP and led to surging interest in AI.
- 2011, Siri: Apple introduces Siri, a virtual assistant that uses speech recognition and NLP to interact with users and perform simple tasks on iOS devices.
- 2016, AlphaGo: Google DeepMind’s AlphaGo AI system defeats world champion Lee Sedol 4-1 in a five-game match of Go, demonstrating the potential of deep learning and reinforcement learning in tackling complex problems.
- 2017, The Transformer breakthrough: Transformer, a deep-learning neural network architecture whose “self-attention” mechanism eliminates the need for recurrence in neural nets, is introduced in the paper “Attention Is All You Need,” authored by eight employees and former employees of Google Brain and Google Research. The transformer breakthrough is that it can process sequential data, such as text, in a massively parallel fashion without losing its understanding of the meaning in the sequences. The parallel processing of sequential data revolutionized NLP and powered the creation of today’s LLMs.
- 2018, The Generative AI breakthrough: In June 2018, four OpenAI researchers publish “Improving Language Understanding by Generative Pre-Training,” which describes how they combined generative pretraining with a transformer model to create the LLM that is known today as GPT-1.
- 2020, GPT-3: OpenAI releases GPT-3, then the largest and most powerful LLM.
- 2022, ChatGPT: In November, OpenAI launches ChatGPT, based on GPT-3.5. This highly capable conversational AI chatbot captures public attention and sparks discussions about the potential and implications of advanced LLMs and generative AI.
The Future of AI
Unsurprisingly, expectations for the future of AI technology follow the familiar trajectory of all technology breakthroughs: AI is expected to rapidly become more capable, widely accessible and less expensive (for the same capability). In the early days of the PC, journalists called this “bigger, better, faster, stronger, cheaper.” And because AI technology automates the analysis and application of knowledge itself, it could reshape the landscape for businesses of all sizes, including SMBs, by becoming increasingly integrated into virtually every aspect of business operations and decision-making.
Because it automates the analysis and application of knowledge itself, AI integration into business and society at large is also likely to cause transformational change. Changes of such magnitude have always faced significant resistance, so early adopters of AI will likewise need to apply change management best practices to be successful. The payoff for early adoption of AI, however, will be worth it, notes Dean Thompson, a former chief technology officer of multiple AI startups that were acquired over the years by companies including LinkedIn and Yelp, where he now works as a senior software engineer on LLMs.
“There will be huge advances at the leading edge that the people at the trailing edge won’t be able to make sense of,” Thompson said. “People and businesses at the leading edge will gain disproportionate power. Companies at the trailing edge will fail, which is disruption. And people at the trailing edge will fail, which also is disruption.”
Thompson and other experts expect to see rapid advancement of foundational AI capabilities in the next two to three years, especially in the LLMs that power generative AI. That will translate into much more reliable models that rarely, if ever, hallucinate; that can generate higher-quality content; and that can manage more complex tasks without human intervention.
Another trend many experts are expecting is the rise of multimodal AI — models that can process and generate multiple formats, such as text, images, audio and video. That could make the middle letter of the LLM acronym, which stands for “language,” obsolete. At the same time, there is a growing exploration of so-called “small language models” (SLMs). Early SLMs have proved surprisingly capable at much smaller sizes than LLMs, which can have trillions of variable “parameters” (the coefficients of the equations inside each artificial neuron). SLMs have only a few billion parameters and, therefore, much smaller file sizes. That smaller size enables them to respond faster, run on smaller devices (because they require less computational power) and consume less energy. Finally, they require less training data. If they prove out, these compact models could help make AI more accessible and cost-effective for businesses to implement. As a result, we could see a proliferation of AI applications tailored to the specific needs and constraints of smaller businesses.
Two more expected trends are rapid customization/specialization of AI models and the integration of AI with other vital technologies. On the customization front, models are expected to be increasingly fine-tuned to specific industries, use cases and data sets. Likewise, companies are expected to customize models for their own specific circumstances, data and business needs. On the integration front, AI combined with technologies such as the Internet of Things (IoT), blockchain and edge computing may present opportunities for businesses to unlock new approaches to applications.
Yet another trend, for which AI observers have coined the term “agentic AI,” is the development of AI systems that are smart enough to function more proactively, without human intervention. Imagine an inventory management system that incorporates advanced agentic AI capabilities. Business managers would set the system’s goals, objectives and specific operational parameters, but the AI system would then have the agency to act on its own analysis and related decisions to optimize inventory operations for the organization.
But advancements like agentic AI will require significant improvement in the refined kind of decision-making termed “executive functioning” in humans. Thompson believes this will improve relatively slowly. Instead, the next few years are more likely to see more significant success in the development of narrower AI applications.
Thompson also anticipates an uneven distribution of AI adoption and human-AI collaboration at the core of the most successful AI implementations.
“AI adoption will be wildly uneven across industries and organizations,” Thompson said, due to the inherent difficulties involved in managing the human and organizational changes required to adopt AI successfully. “This uneven distribution of AI capabilities could lead to significant disruptions and power imbalances.”
For those that do adopt, Thompson believes that focusing on collaboration between humans and AI systems will be a crucial success factor. “The teams and organizations that learn to work effectively with AI will see significant productivity gains and competitive advantages,” he said.
Exceed Your Productivity Goals With NetSuite AI
Businesses navigating the rapidly evolving landscape of AI can consider NetSuite’s cloud-based enterprise resource planning (ERP) system, which offers a powerful set of AI capabilities that can help organizations of all sizes boost their productivity and gain a competitive edge. NetSuite’s embedded AI capabilities can help businesses automate a wide variety of repetitive tasks and gain more valuable insights from their data than previously possible, which equips business managers to make better-informed decisions. NetSuite does this by adding AI functions to its proven, user-friendly ERP with a unified central database.
NetSuite’s AI capabilities span business functions from invoice processing to financial management. For example, NetSuite’s intelligent financial management tools can scan invoices using AI-based object and character recognition, automatically categorize expenses, continuously analyze financial data to detect anomalies and recommend next steps, and provide predictive insights into cash flow and budgeting. NetSuite’s AI-powered demand forecasting and inventory management features can help businesses optimize their stock levels, reduce waste and improve order fulfillment. By integrating these AI capabilities into a comprehensive ERP solution, NetSuite enables businesses to streamline their operations, enhance their agility and unlock new opportunities for innovation and growth, all while keeping pace with the rapid advancements in AI technology.
AI is a transformative technology that is rapidly reshaping the business landscape. AI’s ability to analyze data, automate processes and generate insights can help businesses improve efficiency and drive innovation and growth. As AI rapidly evolves, it’s essential for organizations to stay up-to-date about its potential benefits — and challenges — so they can develop strategies to integrate AI into their planning and operations.
Artificial Intelligence FAQs
What is AI mainly used for?
AI is mainly used for automating tasks, analyzing data and making predictions to help businesses and individuals make better decisions and solve complex problems.
What is the purpose of AI?
AI research and products aim to create intelligent machines that can perform tasks that typically require human intelligence, such as learning, problem-solving and decision-making.
Where is AI used today?
AI is used in a wide range of industries today, including healthcare (diagnosis and drug discovery), finance (fraud detection and algorithmic trading), transportation (autonomous vehicles), manufacturing (predictive maintenance), customer service (chatbots) and more.
Can AI replace humans?
While AI can automate many tasks and augment human capabilities, it is unlikely to replace humans. AI is best suited for specific, well-defined tasks, while humans excel at creativity, empathy and general intelligence. The most successful applications of AI are likely to involve collaboration between humans and machines.
What are people using AI for?
People are using AI for a wide range of purposes. These include automating repetitive tasks and processes, analyzing large volumes of data to identify patterns and insights, making predictions and forecasts based on historical data, personalizing experiences and recommendations for customers, improving decision-making and problem-solving, enhancing creativity and generating new ideas in fields like art and design, augmenting human capabilities and improving workplace productivity.