đź“•
Demystifying the Jargon
With so much rapid change happening right now, it can feel like there's a whole new language being created instead of just new technologies. Buzzwords and Jargon are a real blocker for accessibility, so we've put together a set of definitions for the kinds of terms we've heard.
High-Level AI Concepts
- Artificial Intelligence (AI) - The replication of human intelligence in machines so that it could perform tasks that normally would require human intelligence, such as learning, reasoning, and decision-making.
- Machine Learning (ML) - A subset of AI that enables systems to learn from data and improve performance over time without explicit programming.
- Deep Learning (DL) - A subset of ML that uses neural networks with multiple layers to process large volumes of data.
- Neural Networks: Computational models inspired by the human brain, which consist of layers of interconnected nodes like neurons.
- Natural Language Processing: This is a subfield of AI that deals with interaction between computers and human language.
- Computer Vision: The capability of AI systems to interpret and process visual information from images or videos.
- Generative AI: The AI that generates new content, such as text, images, music, or code, informed by patterns it learns from existing data.
- Foundation Model: A large-scale, general-purpose AI model trained on enormous datasets and API adaptable to many different tasks.
- Self-Supervised Learning: This type of machine learning approach involves the generation of labels by a model from raw data without human annotation.
- Reinforcement Learning - A technique in AI training wherein agents learn optimal actions through rewards or penalties from their decisions.
Model Types and Techniques
- Transformer Model - Deep learning architecture, like GPT, good at handling sequential data using attention mechanisms.
- Large Language Model - A huge AI model trained on vast text data to generate human-like language; for example, ChatGPT and Claude.
- Multimodal AI refers to AI models that process and generate content across a multitude of data types, including but not limited to text, images, and audio.
- Zero-shot learning: the ability of an AI system to perform tasks it explicitly never had training for.
- Few-shot learning: the ability of an AI model to generalize from just a few examples.
- Fine-tuning: when a previously trained AI model is adjusted for a particular task with extra data.
- Prompt Engineering: The process of designing input sequences to elicit specific outputs or responses from generative AI models.
- Semantic Search: AI-powered search that really comprehends meaning, not keyword matching.
- Embedding: A numeric representation of text, images, or any type of data that AI models use for processing and comparing.
- Latent Space: Within the AI model, there's a high-dimensional compacted representation of data in which meaningful relationships are learned.
AI Applications and Use Cases
- Chatbot – A virtual assistant powered by AI that responds to text or voice inputs in real-time.
- AIGC: Text, images, videos, or code created by generative AI models.
- Digital Twin: Virtual replicas of a physical system or process used for simulation, monitoring, or optimisation.
- Synthetic Data: Artificially generated data used to train AI models when real-world data is scarce or sensitive.
- Automated Machine Learning (AutoML) – Tools that automate the process of training and optimising ML models.
- Explainable AI (XAI) – Techniques that make AI decisions more interpretable and transparent to humans.
- Conversational AI – AI models designed to engage in natural, human-like dialogues with users.
- AI Ethics – The study of moral principles and guidelines governing the responsible development and use of AI.
- Bias in AI – Systematic errors in AI models caused by imbalanced training data or flawed design.
- AI Alignment – Ensuring that AI systems operate in ways aligned with human goals, values, and safety.
AI Hardware and Infrastructure
- Graphics Processing Unit (GPU) – High-performance processors optimised for parallel computation, essential for training deep learning models.
- Tensor Processing Unit (TPU) – Google's custom AI chips designed to accelerate deep learning workloads.
- Edge AI: AI models deployed directly on devices, such as smartphones or IoT sensors, without relying on cloud computing.
- Federated Learning: A decentralised approach to training AI models in which the data stays on the user's device and is not shared centrally.
- Quantum AI: A combination of quantum computing and AI that tries to solve complex problems that are beyond the capability of classical computers.
- Compression of AI Models: Techniques of pruning and quantisation which help reduce model size and computation without losing performance.
- Cloud AI: AI services offered via the cloud, including AWS, Google Cloud, and Azure.
- LLM Distillation: A process in which smaller models are trained to emulate large AI models at lower costs and efficiency.
- AI Orchestration: Orchestration of several AI models and workflows to ensure maximum efficiency and scalability.
- Vector Databases: Specialised databases optimised to store and search AI embeddings, such as Pinecone and FAISS.
Emerging Trends and Risks
- AI Singularity: The hypothetical point at which AI surpasses human intelligence, marking unstoppable technological shifts.
- Autonomous AI Agents: AI systems that can perform substantial tasks with limited human intervention.
- AI-Augmented Creativity: Using AI tools to extend human capabilities to create art, write, and design.
- Active AI: AI keeps learning and improving automatically by using the interactions between users.
- Adversarial AI: Methods to try and deceive an AI model by using certain misleading input.
- AI Watermarking: Techniques that help in the tracing of generated AI to avoid misinformation or misusing intellectual property.
- Hallucination: A model that makes up completely wrong information but with high confidence.
- Prompt Injection: A security vulnerability that involves manipulative attacks to lead models to generate responses not intended.
- AI Governance: Policies and frameworks to regulate AI development, deployment, and ethical considerations.
- AI Carbon Footprint: Environmental impact by training and running—especially large-scale—AI models.