Steps in machine learning
- Probability
- Statistics
- Linear algebra
- Calculus
Uncertainty involves making decisions with incomplete information, and this is how we generally operate in the world.
Statistics is a collection of tools that you can use to get answers to important questions about data.
You can use descriptive statistical methods to transform raw observations into information that you can understand and share. You can use inferential statistical methods to reason from small samples of data to whole domains.
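As a minimal sketch of that split (standard library only, with made-up sample values), the first half below summarizes raw observations descriptively, and the second half uses an inferential method, a rough normal-approximation confidence interval, to reason from the sample to the wider population:

```python
import math
import statistics

# Hypothetical sample of raw observations (e.g., response times in ms)
sample = [102.0, 98.5, 110.2, 95.1, 101.7, 99.3, 104.8, 97.6]

# Descriptive statistics: turn raw observations into shareable summaries
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)
print(f"mean={mean:.2f}, stdev={stdev:.2f}")

# Inferential statistics: reason from the small sample to the whole domain.
# A rough 95% confidence interval for the population mean (normal approximation).
margin = 1.96 * stdev / math.sqrt(len(sample))
print(f"95% CI for the mean: ({mean - margin:.2f}, {mean + margin:.2f})")
```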
Linear algebra: we often want to describe operations abstractly, to separate them from specific data or specific implementations.
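For instance (a small NumPy sketch with illustrative values), a rotation can be described once as a matrix, independent of any particular data it will later act on:

```python
import numpy as np

# Describe the operation abstractly: a 90-degree rotation in the plane,
# captured as a matrix, independent of any specific data.
theta = np.pi / 2
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

# The same abstract operation applies to any concrete data points.
points = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
rotated = points @ rotate.T  # apply the linear map to each row vector
print(rotated)
```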
The effectiveness of calculus in solving a complicated but continuous problem lies in its ability to slice the problem into infinitely many simpler parts, solve them separately, and then rebuild them into the original whole.
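A toy illustration of that slicing idea (the function and interval are chosen just for the example): approximate the area under a curve by splitting it into many simple rectangles and summing them back up.

```python
# Approximate the integral of f(x) = x**2 over [0, 1] by slicing the
# problem into many simple pieces (rectangles) and recombining them.
def riemann_sum(f, a, b, n=10000):
    width = (b - a) / n             # width of each simple slice
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * width   # midpoint of the i-th slice
        total += f(x) * width       # area of one simple rectangle
    return total

print(riemann_sum(lambda x: x**2, 0.0, 1.0))  # approaches the exact value 1/3
```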
Uncertainty refers to imperfect or incomplete information.
Much of mathematics is focused on certainty and logic.
Much of programming is this way too, where we develop software with the assumption that it will execute deterministically. Yet, under the covers, the computer hardware is subject to noise and errors that are being checked and corrected all of the time.
Certainty with perfect and complete information is unusual. It is the domain of games and contrived examples.
Almost everything we do or are interested in involves information somewhere on a continuum between certainty and error. The world is messy and imperfect, and we must make decisions and operate in the face of this uncertainty.
For example, we often talk about luck, chance, odds, likelihood, and risk. These are words that we use to interpret and negotiate uncertainty in the world.
Probability provides the language and tools to handle uncertainty.
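As a concrete sketch of one such tool (with made-up numbers), Bayes' theorem updates a belief when imperfect evidence arrives:

```python
# Bayes' theorem with illustrative numbers: a test for a rare condition.
p_condition = 0.01          # prior: 1% of the population has the condition
p_pos_given_cond = 0.95     # sensitivity: P(positive | condition)
p_pos_given_no_cond = 0.05  # false-positive rate: P(positive | no condition)

# Total probability of a positive test result
p_pos = (p_pos_given_cond * p_condition
         + p_pos_given_no_cond * (1 - p_condition))

# Posterior: P(condition | positive) -- how the evidence updates the belief
p_cond_given_pos = p_pos_given_cond * p_condition / p_pos
print(f"P(condition | positive) = {p_cond_given_pos:.3f}")  # about 0.161
```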
- Programming language
- Algorithms: supervised learning, rule-based methods
- Optimization
- Programming algorithms
- Time series
- Data preparation and training
Each machine learning project is unique because the specific data at its core is different.
- Data science techniques
- Gradient boosting: weak learners are added sequentially and combined into a stronger model (see the sketch after this list)
- Imbalanced classification
- Deep learning with Keras
- Deep learning with PyTorch
- Ensemble learning
- Long Short-Term Memory (LSTM) recurrent neural networks
- Behaviour
- NLP (natural language processing)
- Attention and transformers
- Consciousness, drawing on psychology and Indian philosophy
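A minimal sketch of the gradient boosting idea noted above (assuming scikit-learn is available; the dataset and hyperparameters are illustrative): many shallow "weak" trees are added one at a time, each correcting the errors of the ensemble so far.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each weak learner is a shallow tree; boosting adds them sequentially,
# fitting each new tree to the residual errors of the current ensemble.
model = GradientBoostingClassifier(
    n_estimators=100,   # number of weak learners added sequentially
    max_depth=2,        # keep each tree weak (shallow)
    learning_rate=0.1,  # shrink each tree's contribution
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```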
Yes, the attention mechanism in machine learning is indeed inspired by human attention! Here's how they relate:
Human Attention vs. Machine Attention
- Selective Focus:
Human Attention: When you read a sentence or analyze a scene, your brain doesn't process everything equally. Instead, it focuses on the most relevant parts based on context or task.
Machine Attention: Similarly, attention mechanisms assign weights to different parts of the input (e.g., words in a sentence, pixels in an image) to determine their relevance to the current task.
- Dynamic Adjustment:
Human Attention: Your focus can shift dynamically—for example, while reading, you might focus on specific words depending on what you're trying to understand.
Machine Attention: The model computes attention scores dynamically for each input-output pair, focusing on different input tokens for different outputs.
- Context Awareness:
Human Attention: When interpreting a phrase, humans use context to disambiguate meanings or understand relationships.
Machine Attention: Transformers use self-attention to capture relationships between words across the entire sequence, ensuring context is retained.
Example: Reading a Sentence
Human: If you read, "The cat chased the mouse, but it escaped," you might focus on "it" and connect it to "mouse" based on the context.
Machine: An attention mechanism learns these connections by assigning higher weights to "mouse" when processing "it".
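A minimal NumPy sketch of this weighting idea (the token vectors are invented for the example, not taken from a trained model): scaled dot-product attention scores every input token against a query and mixes their values by those weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy embeddings for the tokens the query "it" attends over (invented values)
tokens = ["cat", "chased", "mouse"]
keys = np.array([[0.6, 0.8],    # "cat"
                 [0.1, 0.9],    # "chased"
                 [1.1, 0.1]])   # "mouse"
values = keys.copy()            # in self-attention, values come from the same tokens
query = np.array([0.9, 0.1])    # representation of "it"

# Scaled dot-product attention: score each token, normalize, mix the values
scores = keys @ query / np.sqrt(query.size)
weights = softmax(scores)
context = weights @ values      # weighted mix of token representations

for token, w in zip(tokens, weights):
    print(f"attention({token!r}) = {w:.2f}")  # "mouse" gets the largest weight
```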
Benefits of Attention (Inspired by Humans):
- Efficiency: Just as humans focus on important details instead of processing everything, attention enables models to prioritize relevant parts of the data.
- Scalability: Attention can handle longer contexts, similar to how humans can maintain focus over extended narratives or discussions.
- Interpretability: Attention maps reveal what the model "focuses on," providing insight into its decision-making process.
Limitations of Machine Attention:
While it mimics human attention, it lacks true conscious understanding or intuition. Machine attention relies on patterns in data, whereas humans bring experience, emotions, and reasoning to their focus.
Despite this, attention-based models like Transformers have brought machine learning closer to human-like processing, especially in areas like language understanding and generation.
How you feel internally:
- diet
- physical inactivity
- sleep
- stress