🌐
Google Cloud
cloud.google.com › learn › artificial-intelligence-vs-machine-learning
AI vs. Machine Learning: How Do They Differ? | Google Cloud
Artificial intelligence (AI) and machine learning (ML) are often used interchangeably, but they differ in their uses, data sets, and more.
🌐
AWS
aws.amazon.com › what is cloud computing? › cloud computing concepts hub › machine learning › what’s the difference between machine learning and deep learning?
Deep Learning vs Machine Learning - Difference Between Data Technologies - AWS
1 week ago - As ML and deep learning solutions ... system improves by using it as a data point for training. Traditional machine learning (ML) requires significant human interaction via feature engineering to produce results....
Discussions

A "Traditional" algorithm vs. Machine Learning
The whole thing feels dishonest and misleading. Welcome to the industry :p Here are some of my thoughts: "Machine learning" and "data science" are both incredibly poorly defined, very broad terms whose meaning often depends a lot on the context in which they're being used... More on reddit.com
🌐 r/MLQuestions
10
19
July 1, 2022
Difference Between Classical Programming and Machine Learning
ML programs are fitting parameters of a model to make a generic thing do a specific thing. "Classical" programs are just programmed specifically to do the specific thing... More on reddit.com
🌐 r/AskComputerScience
16
8
December 29, 2023
[RANT] Traditional ML is dead and I’m pissed about it
this is pretty much how tech has always worked, and i say this as someone with more than a decade in dev/ml engineering. there is always churn in skills and massive hype cycles, gotta get used to this. anyways, the fundamentals are not wasted. understanding backprop and gradient descent means you'll actually grok why fine-tuning works and when it'll fail spectacularly. the people who can only do api calls are gonna hit walls you won't. also hot take: we're in peak hype cycle right now. half these genai internships are gonna be building things that get quietly sunset in 18 months when someone realizes their "ai-powered solution" could've been three if statements. a lot of execs and hiring managers right now are incentivized to rush "ai-powered solutions" to market. traditional ml isn't dead, it's just not sexy rn. computer vision, fraud detection, recommendation systems, demand forecasting, anomaly detection are all still running on "boring" ml at massive scale. those jobs exist, they're just not flooding linkedin because they ain't the hot new thing. the real skill is learning to surf hype cycles without drowning in them. pick up the genai stuff (it's legitimately useful), but don't burn your fundamentals notes. More on reddit.com
🌐 r/learnmachinelearning
359
2038
December 11, 2025
is traditional ml dead? : r/learnmachinelearning
🌐 r/learnmachinelearning
People also ask

What is the difference between classification and regression in supervised machine learning?
In classification, the goal is to assign input data to specific, predefined categories. The output in classification is typically a label or a class from a set of predefined options. · In regression, the goal is to establish a relationship between input variables and the output. The output in regression is a real-valued number that can vary within a range. · In both supervised learning approaches the goal is to find patterns or relationships in the input data so we can accurately predict the desired outcomes. The difference is that classification predicts categorical classes (like spam vs. not spam), while regression predicts continuous numerical values (like a price).
🌐
scribbr.com
scribbr.com › home › what is the difference between machine learning and traditional programming?
What is the difference between machine learning and traditional ...
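The contrast above can be sketched in a few lines of Python. Everything below is a toy with invented numbers: nearest-class-mean classification and least-squares regression are chosen only as minimal examples of the two output types.

```python
# Toy sketch of the classification-vs-regression contrast.
# All data below is invented for illustration.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]          # roughly y = 2x

# Regression: fit y = w*x + b by least squares -> output is a real number.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def predict_value(x):
    return w * x + b               # a number anywhere in a range

# Classification: nearest class mean -> output is one of the predefined labels.
examples = {"spam": [0.9, 0.8, 0.95], "not_spam": [0.1, 0.2, 0.05]}
means = {label: sum(v) / len(v) for label, v in examples.items()}

def predict_label(score):
    return min(means, key=lambda label: abs(score - means[label]))

print(predict_value(5.0))   # a real value (~10.05 here)
print(predict_label(0.85))  # a label: "spam"
```

Same supervised recipe in both cases; only the type of output differs.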
What is an example of a machine learning application in real life?
A real-life application of machine learning is an email spam filter. To create such a filter, we would collect data consisting of various email messages and features (subject line, sender information, etc.) which we would label as spam or not spam. We would then train the model to recognize which features are associated with spam emails. In this way, the ML model would be able to classify any incoming emails as either unwanted or legitimate.
🌐
scribbr.com
scribbr.com › home › what is the difference between machine learning and traditional programming?
What is the difference between machine learning and traditional ...
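As a concrete sketch of that workflow, here is a minimal toy spam filter in Python. Naive Bayes with add-one smoothing is one common choice for this kind of filter (the source doesn't name a specific model), and the "emails" and labels below are invented for illustration.

```python
from collections import Counter
import math

# Toy labeled training set, as described above: texts plus spam/ham labels.
train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Naive Bayes: pick the label maximizing
    log P(label) + sum(log P(word | label)), with add-one smoothing."""
    best_label, best_score = None, -math.inf
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("free money"))       # -> "spam"
print(classify("meeting at noon"))  # -> "ham"
```

The features associated with spam are never hand-coded; they fall out of the word counts.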
When should I use supervised learning?
Supervised learning should be used when your dataset consists of labeled data and your goal is to predict or classify new, unseen data based on the patterns learned from the labeled examples. · Tasks like image classification, sentiment analysis, and predictive modeling are common in supervised learning.
🌐
scribbr.com
scribbr.com › home › what is the difference between machine learning and traditional programming?
What is the difference between machine learning and traditional ...
🌐
Scribbr
scribbr.com › home › what is the difference between machine learning and traditional programming?
What is the difference between machine learning and traditional programming?
June 27, 2023 - Traditional programming and machine learning are essentially different approaches to problem-solving, but machine learning is automated.
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus … Wikipedia
🌐
Wikipedia
en.wikipedia.org › wiki › Machine_learning
Machine learning - Wikipedia
5 days ago - From a theoretical viewpoint, probably ... describing machine learning. Most traditional machine learning and deep learning algorithms can be described as empirical risk minimisation under this framework....
🌐
IBM
ibm.com › think › topics › machine-learning
What is Machine Learning? | IBM
3 days ago - Consider a maze: a policy-based agent might learn “at this corner, turn left,” while a value-based agent learns a score for each position and simply moves to an adjacent position with a better score. Hybrid approaches, such as actor-critic methods, learn a value function that’s then used to optimize a policy. In deep reinforcement learning, the policy is represented as a neural network. Deep learning employs artificial neural networks with many layers—hence “deep”—rather than the explicitly designed algorithms of traditional machine learning.
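The value-based idea in that maze description can be sketched in a few lines of Python. This is a toy corridor rather than a real maze, and the goal reward and discount factor are invented: the agent first computes a score per position, then just steps to the neighbour with the better score.

```python
# A minimal value-based agent on a tiny corridor "maze": positions 0..4,
# goal at position 4. Reward and discount values are invented.
GOAL, GAMMA = 4, 0.9
positions = range(5)
neighbours = {p: [q for q in (p - 1, p + 1) if 0 <= q <= 4] for p in positions}

# Value iteration: V(goal) = 1; elsewhere V(p) = gamma * max V(neighbour).
V = {p: 0.0 for p in positions}
V[GOAL] = 1.0
for _ in range(50):  # enough sweeps to converge on this tiny problem
    for p in positions:
        if p != GOAL:
            V[p] = GAMMA * max(V[q] for q in neighbours[p])

def walk(start):
    """The greedy policy implied by the values: move to the best-scored neighbour."""
    path, p = [start], start
    while p != GOAL:
        p = max(neighbours[p], key=V.get)
        path.append(p)
    return path

print(walk(0))  # [0, 1, 2, 3, 4]
```

A policy-based agent would instead store the move itself ("at this corner, turn left"); here the moves are derived from the learned scores.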
🌐
Databricks
databricks.com › blog › ai-vs-machine-learning
AI vs. Machine Learning: Understanding the Differences and Real-World Applications | Databricks Blog
2 weeks ago - The model learns patterns from ... limited supervision with pattern discovery. Traditional machine learning and modern approaches differ primarily in how they handle features, i.e....
🌐
GeeksforGeeks
geeksforgeeks.org › machine learning › traditional-programming-vs-machine-learning
Traditional Programming vs Machine Learning - GeeksforGeeks
July 14, 2025 - In contrast, machine learning enables computers to learn patterns from data and make decisions or predictions, allowing them to handle complex or evolving tasks where manual rule writing is impractical.
🌐
Reddit
reddit.com › r/mlquestions › a "traditional" algorithm vs. machine learning
r/MLQuestions on Reddit: A "Traditional" algorithm vs. Machine Learning
July 1, 2022 -

My Question is: What distinguishes a traditional algorithm from machine learning?

Apologies for the wall of text.

I manage a product with a massive amount of data (1m+ weekly users, 50+ demographic datapoints on each user + user history as well as their interactions with hundreds of customers). At the core of the product is an algorithm that takes a number of inputs (based on trailing historical data) to predict the revenue-optimizing decision.

Recently, our new leadership has begun to call this Data Science and touts this as "Machine Learning". I'm proud of what we've put together and the impact it's had on the business, but this feels like the wrong characterization of what is just a semi-complex algorithm with almost all of the calculations occurring in SQL.

This has become a sort of big issue as they've asked me to speak to our "Machine Learning" implementation to customers, investors, and others. I dodged that characterization by instead calling it a "model" or "algorithm" and they took notice and have asked me to embrace the term and update our materials (presentations, roadmap items, etc). Compounding this, they've hired a data scientist who concurs with them that we're using a "predictive machine learning" model. I'm skeptical of his expertise and feel like he should be making an effort to create an actual ML model we can compare against our current model.

The whole thing feels dishonest and misleading. Machine learning feels far outside my depth: I couldn't hold a conversation about it and I have no real clue what a decision forest, neural network, tensors, gradients, or any of the other machine learning terms I see across this sub or elsewhere mean. More details specific to my situation below:

------------------------------------------------------------

The core goal of our data effort is: Based on what we know about a user and what we know about a customer and their provided estimates, what's the optimal revenue-maximizing decision?

There's many calculations that are factored in to accomplish this, for example:

  • We calculate the median absolute deviation of a customer's proposed vs. actual success rate on a rolling basis.

  • We segment our users based on demographics (age/gender/etc.) and calculate their success rate relative to the population's average to produce a success coefficient, on a rolling basis.

  • We run a simple regression between user characteristics and historical success rates for each customer.

  • We factor in historical reconciliation rates from the customer (% of successes that are ultimately rejected by the customer at invoicing) to discount revenue estimations.

  • We determine whether the user's experience should be optimized using a revenue-per-minute or revenue-per-opportunity approach. If we expect them to make a limited number of attempts, we maximize the expected revenue of each interaction. If we expect them to make a larger number of attempts, we optimize for potential revenue per minute. (EPC vs EPM for those in the advertising space)

It gets pretty gnarly, but what we end up with is a huge number of coefficients that inform our user to opportunity matching logic. An example of how this could result in different opportunity rankings for a pair of users could be:

User 1 - Average Attempts per Session 2.1 (to be ranked by Expected Revenue)

  1. Project A - Potential Revenue $10 | Expected Revenue $2 | Estimated Success Rate 20% | 30 Minutes | Expected Earnings Per Minute $0.06

  2. Project B - Potential Revenue $25 | Expected Revenue $1 | Estimated Success Rate 4% | 10 Minutes | Expected Earnings Per Minute $0.10

  3. Project C - Potential Revenue $1 | Expected Revenue $0.80 | Estimated Success Rate 80% | 5 Minutes | Expected Earnings Per Minute $0.16

  4. Project E - Potential Revenue $10 | Expected Revenue $0.6 | Estimated Success Rate 6% | 4 Minutes | Expected Earnings Per Minute $0.15

User 2 - Average Attempts per Session 6.3 (to be ranked by Expected Earnings Per Minute)

  1. Project C - Potential Revenue $1 | Expected Revenue $0.90 | Estimated Success Rate 100% | 5 Minutes | Expected Earnings Per Minute $0.18

  2. Project D - Potential Revenue $4 | Expected Revenue $0.75 | Estimated Success Rate 18% | 7 Minutes | Expected Earnings Per Minute $0.15

  3. Project E - Potential Revenue $10 | Expected Revenue $0.5 | Estimated Success Rate 5% | 4 Minutes | Expected Earnings Per Minute $0.125

  4. Project B - Potential Revenue $25 | Expected Revenue $0.75 | Estimated Success Rate 3% | 10 Minutes | Expected Earnings Per Minute $0.075
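For illustration, the first ranking above can be reproduced with the arithmetic the post describes: expected revenue = potential revenue × estimated success rate, and earnings per minute = expected revenue ÷ minutes. The field names and the attempts cutoff below are assumptions for the sketch, not the product's actual implementation.

```python
# Toy reconstruction of the ranking logic described in the post.
projects = [
    # (name, potential_revenue, estimated_success_rate, minutes)
    ("A", 10.00, 0.20, 30),
    ("B", 25.00, 0.04, 10),
    ("C", 1.00, 0.80, 5),
    ("E", 10.00, 0.06, 4),
]

def expected_revenue(p):
    _, potential, rate, _ = p
    return potential * rate            # e.g. A: 10 * 0.20 = $2.00

def earnings_per_minute(p):
    return expected_revenue(p) / p[3]  # e.g. C: 0.80 / 5 = $0.16

def rank(projects, avg_attempts, cutoff=3.0):
    # Few expected attempts -> maximize the value of each attempt (EPC);
    # many attempts -> maximize revenue per minute (EPM). Cutoff is assumed.
    key = expected_revenue if avg_attempts < cutoff else earnings_per_minute
    return sorted(projects, key=key, reverse=True)

print([p[0] for p in rank(projects, avg_attempts=2.1)])  # ['A', 'B', 'C', 'E']
print([p[0] for p in rank(projects, avg_attempts=6.3)])  # ['C', 'E', 'B', 'A']
```

This is exactly the kind of deterministic, hand-built coefficient logic the question is asking about: no parameters are fit from data at ranking time.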

Top answer
1 of 6
15
The whole thing feels dishonest and misleading. Welcome to the industry :p Here are some of my thoughts: "Machine learning" and "data science" are both incredibly poorly defined, very broad terms whose meaning often depends a lot on the context in which they're being used. I'm personally of the opinion that a better phrase for most of what we call "machine learning" is "statistical learning", which subsumes basically all of predictive analytics. In other words, if your model is just a simple linear regression, I still think it's OK to call that "machine learning", although I think it's also fair to characterize that as somewhat dishonest, since the phrase obviously conveys a notion of technical sophistication. Increasingly, the phrase "machine learning" is becoming synonymous with "deep learning", but this absolutely is not correct: the vast majority of applied ML in industry uses techniques like GLMs and tree ensembles. Ultimately, this is all marketing speak anyway, and people will use the language that maximizes hype for whatever they're trying to pitch to you. Speaking from my own experience as someone who's worked as a data professional in a wide diversity of organizations (including two FAANGs), you'd be surprised how unsophisticated a lot of real-world "data science" is. Most of it, even. Most data scientist roles are at organizations that are undergoing a "data transformation" to become more "data driven". This is code for "historically, we made most of our business decisions based on intuition and maybe sticking a finger in the wind." The consequence of this is that data scientists get buried in "low hanging fruit" opportunities: because there's so much room for low-effort improvements by minimally attending to data, the data teams and the organization at large are heavily incentivized to leverage simple, unsophisticated solutions so they can tackle more problems quickly, rather than heavily optimizing solutions to narrow problems.
The value of sophistication is really a function of scale. When I talk about "low hanging fruit", I'm talking along the lines of: domain expertise gets you, say, 50% of the available value for some opportunity, data-informed business rules take that up to 70%, and simple modeling takes that up to 80-85%. So by tackling the low hanging fruit, you've captured close to 70% of the available additional value with very little effort. That last 15% of optimization isn't low hanging fruit; we're going to quickly encounter diminishing returns, and every additional percentage point of optimization is going to come with exponentially more effort. This is the reason the bulk of data scientists are employed by huge companies like FAANGs: the scale of their business is large enough that an incremental improvement of a fraction of a percent can mean millions of dollars in revenue or savings. Conversely, if your organization is not operating at that scale, it's not unlikely that it will cost your company more to invest in optimizing some solution than the value they would get from that solution. And even after putting in that investment, it's still a huge risk. Every application of predictive analytics is essentially a kind of experiment, and with every experiment there's a possibility that the tested hypothesis is wrong and will be rejected, i.e. the model doesn't do anything of value. When data scientists are given the freedom to do their best work, they are and need to be a huge cost center for whatever organization they operate in. Otherwise, you're asking them to essentially be data-savvy business consultants who are perpetually chasing low hanging fruit, which is exactly the position most industry data scientists find themselves in.
Because of these scaling effects, most of the work that actually does require sophistication will end up getting subsumed by engineering teams. If you are a data scientist working in isolation, your ability to operate on large data sets and deploy complex models is limited. This type of work requires engineering support, which means the further away your data scientist is in the org tree from the closest engineer, the more their hands will be tied with respect to the amount of sophistication they can apply to anything that will be deployed. Deep learning and data science tools are rapidly becoming staples of undergraduate CS curricula, which means more engineers are equipped to identify and act on opportunities to apply ML without engaging with a data scientist. This creates a kind of feedback loop that further isolates data scientists from the engineering resources they need, often relegating them to being a kind of "ad hoc analytics monkey" for leadership. Also, the "value function" the solution here is optimizing is probably more multi-faceted than you realize. Specifically, even if your data scientist has all of the engineering resources they could want to deploy the most sophisticated SOTA solution to your org's problem, there might be good reasons why they wouldn't want to. The ultimate goal of these sorts of projects is almost always to drive some kind of behavior, which often means that it's more important for the outputs of a model to be interpretable than predictively accurate. Additionally, the data scientist is ultimately subject to the demands of their customer: the business stakeholder. This unfortunately means that sometimes they will be relegated to approaches whose mechanism can be understood by the stakeholder, especially if it's a new relationship and the data scientist is still building trust in the org. It can even mean the scientist will be required by their client to incorporate features in the model that don't carry any predictive signal at all.
This creates an even heavier bias away from sophistication than the whole "low hanging fruit" or "I'm my own data engineer" thing. First and foremost, the data scientist is doing work for their customer, and they need to make their customer happy as best they can. TL;DR: From what you've shared, I see nothing inappropriate about describing this work at least as "data science". Calling it "ML" carries a weak implication that deep learning or something similarly sophisticated is being used, but even if that's not the case, the fact that they're forming predictions of any kind by performing computations on historical data makes it, I think, appropriate.
2 of 6
2
What distinguishes a traditional algorithm from machine learning? Traditional algorithm: all of the feedback ("reward") is processed by the human engineer, who adjusts the program by hand. Machine learning: the engineer delegates processing some of that reward to the machine itself.
🌐
Medium
medium.com › the-modern-scientist › traditional-ai-vs-supervised-machine-learning-vs-deep-learning-how-to-pick-f2017b0fd1d7
Traditional AI vs Supervised Machine Learning vs Deep Learning- How to Pick | by Devansh | The Modern Scientist | Medium
January 19, 2024 - Traditional AI- The most secure, understandable, and performant. However, Good implementations of traditional AI require that we define the rules behind the system, which makes it unfeasible for many of the use cases that the other 2 techniques thrive on. Supervised Machine Learning- Middle of the road b/w AI and Deep Learning.
🌐
Quixl
quixl.ai › home › deep learning vs. traditional machine learning: choosing the right approach for edtech applications
Deep Learning vs. Traditional Machine Learning: Choosing the Right Approach for EdTech Applications
April 12, 2024 - At the core of the modern AI ... While both stem from the same goal of making computers think, they are fundamentally different in execution and application....
🌐
Insightsoftware
insightsoftware.com › blog › machine-learning-vs-traditional-programming
Traditional Programming vs Machine Learning
May 2, 2025 - It requires a deep understanding of the problem and a clear way to encode the solution in a programming language. Machine Learning: In machine learning, instead of writing explicit rules, a programmer trains a model using a large dataset.
🌐
Avenga
avenga.com › blog › machine learning vs traditional programming
Machine Learning Vs Traditional Programming - Avenga
May 15, 2025 - While with a subset of Artificial Intelligence (AI), Machine Learning is motivated by human learning behavior; we just show examples and let the machine figure out how to solve the problem by itself.
🌐
Institute Data
institutedata.com › us › blog › machine-learning-vs-traditional-programming-choosing-the-right-approach-for-your-projects
Machine Learning vs Traditional Programming: Choosing the Right Approach for Your Projects | Institute of Data
May 23, 2023 - A certain level of computer intelligence comes into play with machine learning models since algorithms can learn from their environment and input data to improve with time. On the other hand, traditional programming systems depend entirely on user input to determine the solution’s output.
🌐
P.M.F. srl
pmf-research.eu › home › deep learning: differences with machine learning and traditional ai
Deep learning vs machine learning and traditional AI
December 10, 2024 - We can summarise them as follows: traditional AI is based on logical and symbolic rules defined by experts. It’s effective for solving well-defined and structured problems, but has limitations when it comes to handling complex, variable and ...
🌐
iSchool
ischool.syracuse.edu › home › articles
Deep Learning vs Machine Learning: Key Differences
September 22, 2025 - Machine learning (ML) is a subset of AI that is primarily focused on enabling computers to learn from data with minimal human intervention. In traditional programming, a human writes explicit rules for the computer to follow, but in machine learning, the computer learns the rules from examples.
🌐
Reddit
reddit.com › r/askcomputerscience › difference between classical programming and machine learning
r/AskComputerScience on Reddit: Difference Between Classical Programming and Machine Learning
December 29, 2023 -

I'm having trouble differentiating between machine learning and classical programming. The difference I've heard is that machine learning is the ability of a computer to learn without being specifically programmed. However, machine learning programs are coded, from what I understand, just like any other program. A machine learning program, just like a classical one, takes a user's input, manipulates it in some way, and then gives an output. The only difference I see is that ML uses more statistics to manipulate data than a classical program, but in both cases data is being manipulated.

From what I understand, an ML program will take examples of data, say pictures of different animals, and can be trained to recognize dogs. It tries to figure out similarities between the pictures. Each time the program is fed a new animal photo, that new photo becomes part of the data, and with each new photo the program gets stronger and stronger at recognizing dogs, since it has more and more examples. Classical programs are also updated when a user enters new data. For example, a variable might keep track of a user's score, and that variable keeps getting updated when the user gains more points.

Please let me know what I am missing about what the real difference is between ML programs and classical ones.

Thanks

Top answer
1 of 7
6
ML programs are fitting parameters of a model to make a generic thing do a specific thing. "Classical" programs are just programmed specifically to do the specific thing. Take something simple enough to do it easily either way: compute exclusive or. Here’s a classical version. bool xor(bool a, bool b) { if((a && !b) || (!a && b)) { return true; } return false; } You can also train a neural network to do this. I’m not going to write all that code, but I’ll explain the concept. A neural network has nodes and edges. You have one node for each input (here I have two inputs, an and b). There are additional nodes downstream from the input layer connected to the input nodes by edges, and edges have weights. To compute the output, you feed each input to one of the input nodes, multiply the input by the weight of each edge coming out of that node, and then sum up all those multiplications and apply some threshold, and that gives you a computed value at each node. You keep doing that through all the connections u til you get to the last node in the network, and it spits out your answer. There’s a lot going on that I glossed over. Here is a more detailed explanation. https://towardsdatascience.com/how-neural-networks-solve-the-xor-problem-59763136bdd7 I said this can solve the xor problem…how? Well, let’s let 1 be true and -1 be false. I feed my inputs (an and b) into those input nodes, do all my multiplications and additions and thresholds, and if my last node outputs 1 or -1, that’s my answer. But will it compute the right thing in all cases? To make sure it does, I train it. I give it examples with the correct answers, and let it calculate. If it’s answer doesn’t match the right answer, I change the weights on those edges in particular ways that eventually make the errors go away. The ML program is the program that changes the weights to make errors go away. I as the programmer am not thinking about exclusive or. I’m just thinking about training data and errors. 
If you know how to just write the program, it would be silly to use ML. It's far harder and slower to determine whether an array is sorted by training some ML method to fit parameters to a model than to just write the code to check it. You use ML when you don't know what else to do. Suppose I give you a bitmap and ask you to compute whether it contains a picture of a squirrel:

```c
bool has_squirrel(bitmap b) {
    for(int row=0; row
```
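The "array is sorted" case mentioned above makes the contrast vivid: the classical solution is a couple of lines of direct code, with no training data or parameter fitting anywhere. A sketch:

```python
def is_sorted(xs):
    """Return True if xs is in non-decreasing order."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

print(is_sorted([1, 2, 2, 9]))  # True
print(is_sorted([3, 1, 2]))     # False
```

The squirrel problem is the opposite extreme: nobody can write the body of that pixel loop by hand, which is exactly when training a model earns its keep.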
2 of 7
3
The difference does not lie fundamentally at the code level, but more at the behavioural level. A machine learning program implements a machine learning algorithm, and a machine learning algorithm is designed to calculate the values of a set of parameters based on the values contained in some dataset. Viewed at this level, it is not really different from a classical program, because both run on deterministic program code. The difference is what the calculated values will be used for.

The reason it is calculating those parameters is that they will define the behaviour of some kind of model. That model can be thought of as a rule or set of rules that define how the world of some AI agent works. The model could be as simple as a straight-line correlation (linear regression), it could be a form of clustering (e.g. k-means), or it could be something more complicated like a neural network that classifies images. Whether simple or complex, they are all defined by a set of numerical parameters. The linear regression model is defined by the gradient and y-intercept of the line. The k-means clustering model is defined by the coordinates of the cluster centroids. The neural network is defined by the weights attached to its edges, which determine how much impact the output of one neuron has on the connected neuron in the next layer.

It's all a set of numbers, deterministically calculated on the basis of the numbers contained in a set of data (all data boils down to numbers, even the data we interpret as images or words). This is what we mean by an ML algorithm 'learning': it is calculating the parameters that define a pattern in the data that you have programmed it to find. Whether that's text, images, video, sound, or just plain old numbers, the difference is in the complexity of the model and how the parameters are used, but the computer is still just calculating and executing program code.
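The linear regression case above can be sketched in a few lines to show how literally "the model is just numbers" should be taken. This is my own minimal illustration using the closed-form least-squares estimates, with a made-up dataset:

```python
def fit_line(xs, ys):
    """Ordinary least squares: 'learn' a gradient and y-intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates.
    gradient = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - gradient * mean_x
    return gradient, intercept

# "Training data": points lying exactly on y = 2x + 1.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
m, c = fit_line(xs, ys)
print(m, c)  # 2.0 1.0

# "Prediction" on unseen data is just arithmetic with those two numbers.
print(m * 10 + c)  # 21.0
```

The entire trained model is the pair `(m, c)`; everything the program can say about new data is computed from them.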
The behaviour this enables, then, takes on the appearance of learning. You provide a set of data to the ML program and it 'learns' the patterns that define that data, allowing it to make predictions about new, unseen data. It appears to take in one set of information, learn from it, and draw conclusions about new information of the same type. So the difference is not at the nuts-and-bolts level of a program calculating the value of a variable; it comes from how that variable is then used within the context of a mathematical model to draw new conclusions about the world.

An illustration would be something like this: a classical program could control a coffee maker to make a cup of coffee. It would be programmed with the set of steps, the order to execute them, and values such as water temperature, coffee-to-water ratio, brew time, and so on. It has some variables that it updates, such as the current water temperature and water volume, but only to make sure it is behaving in line with the predetermined recipe.

The machine learning version would be presented with a set of coffee brewing actions, a dataset of past cups of coffee made with that coffee maker, and some scores out of 10 on the quality of each cup. It would then calculate, based on the dataset, the sequence of actions and the parameter values that lead to the best cup of coffee according to the scores given. That is the training process. Once training is complete, it will use the recipe it calculated to make cups of coffee with the coffee maker.

So the difference is in the outward behaviour. One makes coffee following the programmer-defined steps and parameters. The other uses historical data to calculate the steps and parameters that lead to the best output, and then makes coffee.
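A toy version of the coffee example (my own illustration, not code from the answer) reduces "training" to its simplest possible form: selecting the brewing parameters from whichever past cup scored best, then reusing them. Real training would interpolate and generalise rather than just pick, but the shape of the process is the same.

```python
# Hypothetical history of past cups, each with its parameters and a score.
past_cups = [
    {"water_temp_c": 88, "ratio_g_per_ml": 0.055, "brew_time_s": 180, "score": 6},
    {"water_temp_c": 93, "ratio_g_per_ml": 0.060, "brew_time_s": 210, "score": 9},
    {"water_temp_c": 97, "ratio_g_per_ml": 0.070, "brew_time_s": 240, "score": 4},
]

def learn_recipe(history):
    """'Training': keep the parameter set with the highest historical score."""
    best = max(history, key=lambda cup: cup["score"])
    return {k: v for k, v in best.items() if k != "score"}

recipe = learn_recipe(past_cups)
print(recipe)  # the parameters of the 93 °C cup, the top scorer
```

The classical program would have these parameter values hard-coded by the programmer; here they come out of the data.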