Saturday, January 13, 2024

AI_REVIEW

 

Review of Table of Contents

 

 

Getting Started with AI Game Development

Introducing the Unreal Engine AI System

Presenting the Unreal Engine Navigation System

Setting Up a Navigation Mesh

Improving Agent Navigation

Optimizing the Navigation System

Introducing Behavior Trees

Setting Up a Behavior Tree

Extending Behavior Trees

Improving Agents with the Perception System

Understanding the Environment Query System

Using Hierarchical State Machines with State Trees

Implementing Data-Oriented Calculations with Mass

Implementing Interactable Elements with Smart Objects

Appendix – Understanding C++ in Unreal Engine


REPORT

 

 

Getting Started with AI Game Development

Artificial Intelligence (AI) has rapidly become a transformative force in game development, enabling creators to design experiences that are more immersive, adaptive, and responsive. For newcomers, understanding the foundations of AI in gaming provides the tools to enhance gameplay, create dynamic environments, and deliver unique player experiences. Getting started involves learning the principles of AI, identifying its applications within games, and selecting the right tools and workflows to implement these systems effectively.

Understanding the Role of AI in Games

At its core, AI in games is about simulating intelligent behavior to create lifelike interactions and challenges. Unlike real-world AI systems that emphasize accuracy or data processing, game AI focuses on believability and fun. Whether it is a non-player character (NPC) adapting to player strategies, procedural generation of levels, or personalized difficulty adjustment, the goal of AI in gaming is to deepen engagement. This makes game AI less about replicating human intelligence perfectly and more about creating the illusion of intelligence.

Common Applications of AI in Games

AI in game development is used in several key areas:

  1. NPC Behavior – Designing characters that react, plan, and move intelligently. Techniques include finite state machines, behavior trees, and utility systems.
  2. Pathfinding – Allowing characters to navigate environments smoothly using algorithms like A* or navigation meshes.
  3. Procedural Content Generation (PCG) – Generating levels, maps, or even narratives dynamically, ensuring replayability and variety.
  4. Dynamic Difficulty Adjustment (DDA) – Monitoring player performance and adjusting challenge levels in real-time.
  5. Player Modeling – Learning about the player’s preferences and tailoring experiences accordingly, from dialogue choices to quest design.

Tools and Frameworks

Modern game engines such as Unreal Engine 5 and Unity provide built-in AI frameworks that make implementation more accessible. For instance, Unreal offers Behavior Trees, AI Controllers, and the Environment Query System (EQS), while Unity integrates NavMesh pathfinding and ML-Agents for reinforcement learning. Developers also experiment with machine learning libraries like TensorFlow or PyTorch, though these are more advanced and typically used in research-heavy projects.

For beginners, starting with engine-based AI systems is recommended. These tools are designed for practical game scenarios, providing templates, tutorials, and visual scripting options that reduce the initial coding barrier.

Steps to Begin

  1. Learn the Basics – Study fundamental AI concepts such as decision-making, pathfinding, and probability-based behavior.
  2. Experiment in a Game Engine – Choose an engine (Unreal or Unity) and begin experimenting with NPCs, pathfinding, and simple AI interactions.
  3. Build Small Prototypes – Focus on creating one feature at a time, like an enemy that patrols and reacts to the player.
  4. Incorporate Procedural Elements – Add randomness and adaptability to environments or quests to learn about PCG.
  5. Study Player Experience – Evaluate how AI decisions influence fun, fairness, and engagement, then iterate accordingly.

Challenges and Future Directions

While AI opens exciting possibilities, it also introduces challenges. Complex systems can demand significant processing power, leading to performance trade-offs. Additionally, overly sophisticated AI can frustrate players if not balanced carefully. Looking forward, the integration of neural networks, natural language processing, and adaptive storytelling promises to blur the line between scripted design and emergent gameplay.

Conclusion

Getting started with AI game development requires a balance of foundational learning, experimentation with available tools, and sensitivity to player experience. By mastering core concepts and building progressively more complex prototypes, developers can harness AI not only to enhance mechanics but also to transform storytelling, immersion, and interactivity. For creators like you, John, the path forward is both technical and artistic—where each AI system becomes a brushstroke in the evolving canvas of interactive art.


Introducing the Unreal Engine AI System

Artificial Intelligence (AI) in Unreal Engine provides developers with a robust framework to create believable, adaptive, and dynamic characters that bring games to life. The Unreal Engine AI system balances accessibility for beginners with advanced capabilities for complex projects, making it one of the most versatile toolsets available for modern game development. For a creator seeking to blend technical precision with artistic vision, understanding this system is essential.

Core Components of the AI System

The Unreal Engine AI framework is built from several interlinked systems that together allow developers to define behavior, movement, perception, and decision-making for non-player characters (NPCs).

  1. AI Controllers – These act as the “brain” of an NPC. Instead of placing behavior directly on a character’s pawn or mesh, Unreal assigns an AI Controller to manage decision-making. This separation allows flexibility, as the same character can be controlled by either a player or AI logic.
  2. Behavior Trees – A central feature of Unreal’s AI system, Behavior Trees provide a structured, hierarchical way to define decision-making. Developers use tasks, selectors, and sequences to create branching logic. For example, an NPC might patrol until it perceives a player, then switch to chase or attack. This visual, node-based system reduces complexity while still allowing depth.
  3. Blackboards – Functioning like a shared memory system, Blackboards store variables used by the Behavior Tree. Information such as target location, current state, or detected threats can be passed seamlessly between tasks, keeping the AI organized and responsive.
  4. Perception System (AIPerception) – Unreal includes a powerful perception framework that allows AI characters to detect the world through senses such as sight, hearing, or custom-defined inputs. Developers can fine-tune vision cones, hearing thresholds, and stimuli responses, enabling nuanced reactions to player actions.
  5. Navigation System – Pathfinding in Unreal is handled by Navigation Meshes (NavMesh). The NavMesh represents walkable areas in the game world, and AI agents use it to find efficient routes around obstacles. This allows characters to patrol, chase, or retreat naturally within complex environments.

Workflow in Practice

The Unreal AI workflow usually follows a layered approach:

  • Step 1: Assign an AI Controller to the character.
  • Step 2: Create a Behavior Tree and Blackboard to define decision-making logic.
  • Step 3: Configure perception settings to determine how the AI detects the player.
  • Step 4: Use NavMesh for movement and pathfinding.
  • Step 5: Refine tasks and sequences in the Behavior Tree to produce believable behavior patterns.

This modular design means each element can be iteratively improved without rewriting the entire system. Developers can start small, such as with an NPC that idles and patrols, and expand into advanced behaviors like team coordination, stealth detection, or adaptive difficulty.
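
As a concrete illustration of steps 1 and 2, the sketch below shows a minimal custom AI Controller in C++ that runs a Behavior Tree asset when it possesses a pawn. The class and property names are placeholders, and the project is assumed to list AIModule and GameplayTasks among its module dependencies.

// MyAIController.h (hypothetical class name)
#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "MyAIController.generated.h"

UCLASS()
class AMyAIController : public AAIController
{
    GENERATED_BODY()

protected:
    // Behavior Tree asset assigned in the editor; the tree references its own Blackboard asset.
    UPROPERTY(EditDefaultsOnly, Category = "AI")
    class UBehaviorTree* BehaviorTreeAsset = nullptr;

    virtual void OnPossess(APawn* InPawn) override;
};

// MyAIController.cpp
#include "MyAIController.h"
#include "BehaviorTree/BehaviorTree.h"

void AMyAIController::OnPossess(APawn* InPawn)
{
    Super::OnPossess(InPawn);

    // RunBehaviorTree initializes the Blackboard referenced by the asset and starts the tree.
    if (BehaviorTreeAsset)
    {
        RunBehaviorTree(BehaviorTreeAsset);
    }
}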

Advantages and Applications

Unreal’s AI system offers both speed of development and scalability. Beginners benefit from visual tools like Behavior Trees, while advanced users can integrate C++ or Blueprint scripting for custom solutions. The system supports everything from simple NPCs in indie games to highly complex enemy squads in AAA titles.

Moreover, the AI system extends beyond traditional opponents. It can drive companion characters, ambient wildlife, or even environmental interactions, making worlds feel alive. With procedural generation, dynamic perception, and adaptive behaviors, Unreal’s AI tools empower developers to craft deeply engaging experiences.

Conclusion

The Unreal Engine AI system represents a fusion of structure and flexibility, giving developers the means to build intelligent, believable game worlds. By mastering AI Controllers, Behavior Trees, Blackboards, perception, and navigation, creators can transform static environments into interactive, reactive experiences. For you, John, learning these tools parallels mastering an instrument: each component is like a new technique that, when combined, produces a symphony of responsive gameplay.


Presenting the Unreal Engine Navigation System

The Unreal Engine Navigation System is one of the most critical tools for developing believable and responsive artificial intelligence (AI) in modern games. At its core, this system provides the framework that allows non-player characters (NPCs) to move intelligently within complex environments, avoiding obstacles, reaching goals, and adapting dynamically to changes in the game world. For developers seeking to create immersive gameplay, mastering navigation is essential, as it transforms static characters into living agents capable of interacting fluidly with players and surroundings.

The Foundation: Navigation Mesh (NavMesh)

The backbone of Unreal’s navigation system is the Navigation Mesh (NavMesh). Unlike a simple grid, the NavMesh represents walkable areas of the environment as interconnected polygons. When the level is built, Unreal automatically generates this mesh by analyzing the geometry and surfaces in the scene. Developers can fine-tune the NavMesh to define where characters can or cannot move, specifying parameters like step height, maximum slope, and agent radius.

This flexibility allows designers to customize navigation for different character types. For instance, a humanoid NPC may need stairs and flat surfaces, while a spider-like creature can traverse steep walls. By adjusting NavMesh agents, multiple movement profiles can coexist in the same world, supporting diverse character behaviors.

Pathfinding and Movement

Once a NavMesh is created, NPCs use it to calculate efficient routes from their current position to a destination. Unreal’s default pathfinding algorithm is A*, a well-established method that balances accuracy and performance. The AI Navigation System ensures characters don’t simply move in straight lines but intelligently account for obstacles, corners, and terrain variations.

The system also supports dynamic obstacles, meaning AI can adapt to moving or newly spawned objects. For example, if a door suddenly closes or a barricade appears, the AI recalculates its route in real time. This adaptability enhances believability, as characters appear aware of their environment rather than blindly following pre-set paths.

Key Components in the Workflow

  1. NavMesh Bounds Volume – Defines the region where navigation data is built. Without it, no walkable area exists.
  2. AI Move To Nodes – Blueprint or C++ commands that instruct an AI agent to move to a specific location or follow a target.
  3. Nav Link Proxy – Special actors that handle non-standard navigation like jumping gaps, climbing ladders, or teleporting.
  4. Dynamic NavMesh Updates – Enables real-time modifications to the mesh, ensuring AI responds correctly when the level changes mid-game.

Together, these elements make navigation intuitive to set up yet powerful enough for advanced gameplay scenarios.
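
To make item 2 concrete, the small C++ sketch below issues a move order through the AI Controller. The helper function and variable names are placeholders, and a NavMesh Bounds Volume is assumed to already cover the area.

#include "AIController.h"
#include "GameFramework/Pawn.h"

// Hypothetical helper: send an AI-controlled pawn toward a target actor.
void SendAgentTowardsTarget(APawn* AgentPawn, AActor* TargetActor)
{
    if (AAIController* AIController = Cast<AAIController>(AgentPawn->GetController()))
    {
        // MoveToActor requests a path on the NavMesh and follows it;
        // the acceptance radius stops the agent 50 units short of the goal.
        AIController->MoveToActor(TargetActor, /*AcceptanceRadius=*/50.0f);
    }
}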

Applications and Advantages

The Unreal Navigation System is not limited to combat AI. It is equally vital for creating companions, crowds, or wildlife that move naturally through a space. Designers can implement patrols, pursuit, evasion, or cooperative movement by combining navigation with Behavior Trees and Perception Systems.

Its scalability is another strength. From small indie projects to massive open worlds, Unreal’s navigation tools adapt to different performance budgets. Developers can use hierarchical pathfinding for large maps or simplify meshes to maintain efficiency.

Conclusion

The Unreal Engine Navigation System empowers developers to build intelligent, responsive movement for AI characters, bridging the gap between static design and living gameplay. By mastering NavMesh, pathfinding, and dynamic navigation tools, creators can produce NPCs that feel aware, adaptive, and believable. For you, John, this system resembles guiding a violinist’s hand across the fingerboard: the pathways must be precise, fluid, and responsive to changes, ensuring that every motion contributes to a harmonious performance.


Setting Up a Navigation Mesh (NavMesh) in Unreal Engine

A Navigation Mesh (NavMesh) is the backbone of AI movement in Unreal Engine, defining where agents can walk, how they avoid obstacles, and how they reroute when the world changes. Setting it up well ensures believable patrols, chases, and companion behaviors—without brittle, hand-authored paths.

1) Place and Build the NavMesh

  1. Add a NavMesh Bounds Volume
    In the Level Editor, place Nav Mesh Bounds Volume actors to encompass every walkable area (floors, ramps, stairs). You can use multiple volumes for multi-room layouts. Scale to fit tightly; excess space increases build time.
  2. Visualize the Mesh
    Press P (or use Show → Navigation) to preview the NavMesh. Green/teal areas indicate walkable polygons; gaps reveal places agents cannot traverse.
  3. Build/Runtime Generation
    For static levels, a one-time bake is fine. For dynamic setups (moving doors, spawned obstacles), enable Project Settings → Navigation System → Runtime Generation: Dynamic so the NavMesh updates as geometry changes.

2) Tune Recast Parameters (Per-Agent)

Unreal’s Recast-based NavMesh is agent-aware. In Project Settings → Navigation System and RecastNavMesh-Default, tune:

  • Agent Radius/Height: Controls clearance around walls and under ceilings. Set the radius to roughly your pawn’s capsule radius; set the height to the full capsule height (about twice the capsule half-height) plus a little headroom.
  • Max Step Height & Max Slope: Permit stairs/ramps. Match your character’s actual traversal capabilities.
  • Cell Size/Height & Tile Size UU: Smaller cells/tiles yield more precise meshes (better around corners/props) but cost more memory/CPU. Start with defaults; only tighten where path fidelity matters (narrow bridges, cluttered rooms).
  • Region Min Size / Merge Settings: Remove tiny, unwalkable islands; reduce fragmentation.

If you support multiple agent types (e.g., human + small drone), define Supported Agents with distinct radii/heights. Each agent gets its own NavMesh data for accurate pathing.

3) Control Where AI Can and Can’t Walk

  • Nav Modifier Volume: Mark zones as Null (unwalkable), or apply custom Area Classes with costs (e.g., “Tall Grass” high cost; “Road” low cost). Costs bias pathfinding without forbidding traversal. A minimal area-class sketch follows this list.
  • Nav Link Proxy: Bridge gaps and special moves—jumps, drops, ladders. Configure bidirectional links and smart reach tests to avoid edge jitter.
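
Below is a minimal sketch of a custom area class of the kind mentioned for the Nav Modifier Volume; the class name and cost value are illustrative. Assigning it to a Nav Modifier Volume placed over the grass nudges agents toward cheaper surfaces without forbidding the shortcut.

// NavArea_TallGrass.h (hypothetical area class)
#pragma once

#include "CoreMinimal.h"
#include "NavAreas/NavArea.h"
#include "NavArea_TallGrass.generated.h"

UCLASS()
class UNavArea_TallGrass : public UNavArea
{
    GENERATED_BODY()

public:
    UNavArea_TallGrass()
    {
        // Higher per-unit traversal cost: the area stays walkable, but pathfinding
        // prefers cheaper surfaces (such as a road) when a detour is reasonable.
        DefaultCost = 5.0f;
    }
};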

4) Scale to Big Worlds

For open worlds/World Partition, use Navigation Invokers on AI/players so the engine builds tiles around active actors only, dramatically reducing memory and build time. Combine with Hierarchical Pathfinding for long-distance queries that refine locally near the goal.

5) Drive Movement

With the mesh in place, issue moves via AI MoveTo (Blueprint) or UAIBlueprintHelperLibrary::SimpleMoveToActor / SimpleMoveToLocation (C++). For richer logic, read/write target keys in a Blackboard and orchestrate moves via Behavior Trees. Pair with AIPerception so sight/hearing events update destinations and the AI can re-path when stimuli appear or vanish.

6) Debug & Troubleshoot

  • If AI won’t move, confirm the pawn uses Character Movement (or has a NavMovementComponent), sits on the NavMesh (press P), and that the AI Controller possesses it.
  • Check for tiny gaps: reduce Agent Radius slightly or refine Cell Size.
  • For doorways blocking paths, ensure door collision updates the NavMesh (dynamic runtime) or add Smart Links.


Improving Agent Navigation in Unreal Engine

In Unreal Engine, agent navigation is the system that allows non-player characters (NPCs) or AI agents to move intelligently through game worlds. While setting up a basic Navigation Mesh (NavMesh) ensures that characters can walk from point A to point B, improving navigation involves refining movement quality, responsiveness, and believability. For developers, this means tuning parameters, using advanced tools, and integrating navigation with perception and decision-making systems.

Fine-Tuning the NavMesh

The first step toward better navigation is optimizing the NavMesh itself. By carefully adjusting parameters such as agent radius, height, maximum slope, and step height, developers can ensure that movement feels natural to each character type. For example, a humanoid NPC should walk upstairs seamlessly, while a quadruped or small drone may require narrower paths. Using multiple agent profiles, Unreal allows different character classes to share the same world but navigate it with distinct constraints.

Additionally, navigation modifiers add realism. With Nav Modifier Volumes, designers can restrict or discourage AI from using certain areas by assigning traversal costs. For instance, tall grass could have a higher pathfinding cost than a paved road, nudging agents toward more efficient routes without outright blocking alternatives.

Handling Dynamic Worlds

Static navigation alone is insufficient for most games, where obstacles and environments change constantly. Enabling dynamic runtime generation allows the NavMesh to rebuild in real time when geometry shifts. This ensures agents adapt naturally when doors close, bridges collapse, or new structures appear. Combined with Navigation Invokers, which build NavMesh tiles around active characters only, developers can maintain performance even in large open worlds.
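
As a rough sketch of the invoker setup, the constructor below attaches a Navigation Invoker component to a character class. The class name is a placeholder, the tile radii are left at their defaults, and the project-level option to generate navigation only around invokers is assumed to be enabled.

// WandererCharacter.h (hypothetical class name)
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "WandererCharacter.generated.h"

UCLASS()
class AWandererCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    AWandererCharacter();

protected:
    // Requests NavMesh tiles only around this actor at runtime.
    UPROPERTY(VisibleAnywhere, Category = "AI")
    class UNavigationInvokerComponent* NavInvoker = nullptr;
};

// WandererCharacter.cpp
#include "WandererCharacter.h"
#include "NavigationInvokerComponent.h"

AWandererCharacter::AWandererCharacter()
{
    // Tile generation/removal radii can be tuned on the component in the editor;
    // "Generate Navigation Only Around Navigation Invokers" must be enabled in
    // Project Settings -> Navigation System for invokers to take effect.
    NavInvoker = CreateDefaultSubobject<UNavigationInvokerComponent>(TEXT("NavInvoker"));
}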

Enhancing Pathfinding Quality

Unreal’s default A* pathfinding is efficient, but there are several ways to improve results:

  • Smart Links: These bridge gaps and support special moves such as jumping across chasms or climbing ladders. Agents can blend standard walking with context-sensitive actions.
  • Hierarchical Pathfinding: For expansive maps, this system finds broad routes at a high level, then refines details locally, reducing CPU load.
  • Path Smoothing: By reducing sharp turns and zigzagging, smoothing algorithms create more natural, lifelike movement.

Together, these methods prevent agents from looking mechanical or robotic, instead appearing thoughtful and adaptable.

Integrating Perception and Navigation

Navigation becomes far more convincing when tied to AI Perception. Agents that rely on sight, hearing, or custom senses can update their destinations in response to stimuli. For example, a guard patrolling a corridor may deviate from its route upon hearing footsteps, or a companion character may adjust position to stay close to the player. By feeding perception data into Behavior Trees and Blackboards, developers create fluid, context-aware navigation behaviors.

Debugging and Testing

Improved navigation requires continuous testing. Developers can use Unreal’s debugging tools to visualize NavMesh coverage, pathfinding decisions, and agent movement. If agents stall, jitter, or fail to move, solutions often involve adjusting collision sizes, refining NavMesh resolution, or checking AI Controller possession. Systematic troubleshooting ensures smooth, predictable results.

Conclusion

Improving agent navigation in Unreal Engine is about going beyond basic movement and crafting intelligent, context-sensitive behaviors. By refining NavMesh parameters, supporting dynamic environments, enhancing pathfinding, and integrating perception, developers can create AI agents that move with fluidity and believability. For you, John, this process mirrors refining bowing technique on the violin: precision, adaptability, and responsiveness elevate simple movements into expressive performance.


Optimizing the Navigation System in Unreal Engine

The Unreal Engine Navigation System is central to creating believable, responsive AI movement. At its core, it allows non-player characters (NPCs) to traverse complex worlds using Navigation Meshes (NavMesh) and intelligent pathfinding. While a basic setup enables agents to move from one point to another, optimization ensures that movement is efficient, scalable, and visually convincing. Optimizing the system is essential for projects ranging from small levels to massive open-world games.

Balancing Precision and Performance

The NavMesh is built by analyzing geometry and generating walkable polygons. The cell size, cell height, and tile size parameters control its resolution. Smaller values produce more accurate navigation around corners and props but increase memory and CPU usage. Larger values are faster but can make movement look imprecise. The key is balance: fine detail in areas of high interaction (interiors, narrow bridges) and coarser resolution in open terrain. Developers can also define multiple NavMesh bounds volumes with different settings for targeted optimization.

Multiple Agents and Custom Areas

Games often feature diverse characters—humans, creatures, vehicles—all requiring different navigation. Unreal supports multiple agent profiles, each with distinct radius, step height, and slope limits. This avoids bottlenecks, such as large NPCs trying to squeeze through doors meant for smaller ones. Additionally, area classes allow developers to assign traversal costs to different surfaces. For instance, mud can be marked as high-cost and paved roads as low-cost, nudging AI toward more efficient paths while keeping variety in movement. Optimizing these costs prevents unnecessary detours or unrealistic path choices.

Dynamic Navigation

Static meshes alone are insufficient for games with evolving worlds. Enabling dynamic NavMesh generation ensures navigation adapts to changes like closing doors, collapsing structures, or new obstacles. For large worlds, Navigation Invokers are particularly valuable. Instead of generating navigation data everywhere, Invokers build NavMesh tiles only around active agents or players. This dramatically reduces overhead while maintaining real-time responsiveness in expansive environments.

Enhancing Pathfinding

Unreal’s built-in A* pathfinding is reliable, but optimization can refine movement further:

  • Hierarchical pathfinding simplifies large-scale searches by breaking worlds into sectors, resolving high-level routes first before refining details.
  • Path smoothing reduces sharp turns and zigzags, making agent movement appear more natural.
  • Smart links allow for special navigation like jumping gaps or climbing ladders. These context-sensitive moves prevent agents from appearing limited to only flat surfaces.

Optimizing pathfinding is not just about speed but about maintaining immersion by producing believable agent behavior.

Debugging and Testing Tools

Unreal offers visualization tools to preview navigation. Pressing P shows walkable regions, while AI Debugging (’ key) overlays perception, pathing, and movement information in real time. Consistent debugging helps identify bottlenecks—such as missing NavMesh areas, overly expensive paths, or poor agent placement—allowing developers to refine settings quickly.

Conclusion

Optimizing the Navigation System in Unreal Engine is about crafting a balance between accuracy, efficiency, and immersion. By fine-tuning NavMesh parameters, supporting multiple agents, leveraging dynamic generation, and improving pathfinding quality, developers ensure AI moves convincingly without straining system resources. For you, John, this mirrors refining musical performance: just as a violinist adjusts bow pressure and finger placement for both clarity and efficiency, a game developer optimizes navigation so every movement feels smooth, expressive, and alive.


Introducing Behavior Trees in Unreal Engine

In Unreal Engine, Behavior Trees are one of the most powerful tools for designing complex artificial intelligence (AI). Instead of scripting every action individually, developers can use Behavior Trees to create structured, modular, and dynamic decision-making systems. They allow non-player characters (NPCs) to react to the game world in ways that feel both intelligent and believable.

What Are Behavior Trees?

A Behavior Tree is a hierarchical decision-making model, represented visually in Unreal as a flowchart of nodes. Each node represents a task, condition, or decision. The system evaluates these nodes from the root downward, choosing the most appropriate action based on current circumstances. This structured approach mirrors how players expect characters to behave: patrol when idle, chase when enemies are spotted, attack when in range, or retreat when health is low.

Unlike traditional state machines, Behavior Trees are designed to handle branching logic and priorities in a more scalable way. This makes them especially effective for games with numerous NPCs or layered interactions.

Core Components of a Behavior Tree

  1. Root Node – The entry point that starts the decision-making process.
  2. Composite Nodes – These organize the flow of logic. Common types include:
    • Selector: Chooses the first child node that succeeds (e.g., check for enemy → if none, patrol).
    • Sequence: Executes child nodes in order until one fails (e.g., move to target → aim → attack).
  3. Decorator Nodes – Conditions attached to nodes, determining whether a branch can execute. For example, “Is Player Visible?” might gate an attack branch.
  4. Task Nodes – Leaf nodes that perform specific actions such as moving, playing animations, or updating variables.
  5. Blackboard – A shared memory system that stores key variables (like target location, current health, or patrol index). The Behavior Tree reads from and writes to this data, keeping decisions consistent.

Workflow in Practice

A typical Behavior Tree begins with high-level priorities, such as combat, exploration, or idle behavior. For example:

  • Selector Node checks conditions in order: If enemy detected → combat branch; if not → patrol branch.
  • Within the combat branch, a Sequence Node might direct the AI to move toward the player, prepare an attack, then strike.
  • Decorators control flow, such as preventing attacks unless the enemy is within range.

This modularity allows designers to add or modify behaviors without rewriting the entire AI system.

Advantages of Behavior Trees

  • Scalability: Easily expanded to handle new behaviors.
  • Clarity: Visual graphs provide a clear overview, making debugging straightforward.
  • Reusability: Behavior Trees can be reused across multiple characters with minor adjustments.
  • Integration: They work seamlessly with Unreal’s Perception System and Navigation, producing NPCs that see, hear, and move naturally.

Conclusion

Behavior Trees in Unreal Engine provide a structured, flexible framework for creating intelligent, believable AI. By combining composites, decorators, and tasks with Blackboard data, developers can build characters that respond to dynamic worlds with fluid decision-making. For you, John, Behavior Trees echo the logic of structured practice in violin playing: breaking down decisions into sequences, conditions, and priorities ensures every action flows with clarity and purpose, resulting in performances—whether musical or interactive—that feel alive and responsive.


Setting Up a Behavior Tree in Unreal Engine

Behavior Trees are one of Unreal Engine’s most effective systems for creating complex AI logic. They allow developers to design decision-making hierarchies in a modular and visual way, enabling non-player characters (NPCs) to react naturally to changing environments. Setting up a Behavior Tree involves establishing the supporting framework—AI Controllers and Blackboards—then constructing and refining the Behavior Tree itself.

Step 1: Preparing the Framework

Before building the Behavior Tree, two supporting components must be in place:

  1. AI Controller – This is the “brain” that manages the Behavior Tree for an NPC. Each AI character in Unreal must be assigned an AI Controller that will initialize and run its decision-making.
  2. Blackboard – This acts as the AI’s memory. Variables such as “Target Actor,” “Target Location,” or “Is Player Visible?” are stored here. The Blackboard communicates with the Behavior Tree, ensuring that decisions are based on up-to-date data.

Together, these systems form the foundation on which the Behavior Tree operates.

Step 2: Creating the Behavior Tree Asset

In the Content Browser, developers create a new Behavior Tree asset and link it to a corresponding Blackboard. Opening the Behavior Tree Editor reveals a node-based workspace where logic can be constructed visually. At the top sits the Root Node, which launches the decision-making process.

Step 3: Adding Composite Nodes

The first level of branching begins with composite nodes, which control the flow of logic:

  • Selector nodes evaluate children in order and run the first one that succeeds. This is useful for prioritizing behaviors, such as “attack if enemy is visible, otherwise patrol.”
  • Sequence nodes execute children in order until one fails. This is ideal for step-by-step tasks like “move to target, aim, fire weapon.”

Using these nodes together allows designers to structure high-level decision-making while keeping each branch modular.

Step 4: Incorporating Decorators

Decorators serve as conditions that gate whether a branch can run. For example, a decorator might check whether the “Player is in Range” Boolean from the Blackboard is true before allowing the attack sequence to execute. Decorators ensure that NPCs behave contextually, reacting only when conditions are met.

Step 5: Defining Task Nodes

Task nodes are the leaves of the tree and represent the AI’s actual actions. Common tasks include moving to a location, waiting, playing animations, or updating Blackboard keys. Developers can use built-in tasks or create custom ones in Blueprint or C++.

For instance, a “Move To” task might reference the “Target Location” key in the Blackboard, instructing the AI to chase a detected enemy. A “Play Animation” task could trigger an attack animation once within range.
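
The sketch below shows what a small custom task might look like in C++: it reads a vector key from the Blackboard and asks the AI Controller to move there. The class and key names are placeholders, and a production version would return InProgress and complete asynchronously instead of succeeding immediately.

// BTTask_MoveToLastKnownLocation.h (hypothetical task)
#pragma once

#include "CoreMinimal.h"
#include "BehaviorTree/BTTaskNode.h"
#include "BTTask_MoveToLastKnownLocation.generated.h"

UCLASS()
class UBTTask_MoveToLastKnownLocation : public UBTTaskNode
{
    GENERATED_BODY()

public:
    // Blackboard key holding the destination (chosen in the Behavior Tree editor).
    UPROPERTY(EditAnywhere, Category = "Blackboard")
    FBlackboardKeySelector DestinationKey;

    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) override;
};

// BTTask_MoveToLastKnownLocation.cpp
#include "BTTask_MoveToLastKnownLocation.h"
#include "AIController.h"
#include "BehaviorTree/BlackboardComponent.h"

EBTNodeResult::Type UBTTask_MoveToLastKnownLocation::ExecuteTask(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    AAIController* Controller = OwnerComp.GetAIOwner();
    UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
    if (!Controller || !Blackboard)
    {
        return EBTNodeResult::Failed;
    }

    // Read the destination written earlier (for example by a perception callback or a service).
    const FVector Destination = Blackboard->GetValueAsVector(DestinationKey.SelectedKeyName);
    Controller->MoveToLocation(Destination, /*AcceptanceRadius=*/50.0f);

    // Succeeding immediately keeps this sketch short; a real task would wait for the move to finish.
    return EBTNodeResult::Succeeded;
}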

Step 6: Testing and Debugging

Unreal provides debugging tools that visualize Behavior Tree execution in real time. When running the game, developers can open the Behavior Tree Editor and watch nodes light up as they are evaluated. This makes it easier to identify faulty logic, incorrect conditions, or Blackboard values that are not updating properly.

Conclusion

Setting up a Behavior Tree in Unreal Engine involves more than linking nodes; it requires integrating AI Controllers, Blackboards, composites, decorators, and tasks into a coherent structure. This layered design allows for scalable, reusable, and responsive AI behaviors. For you, John, the process mirrors preparing a violin performance: first tuning the instrument (framework), then organizing phrasing and bowings (composites and decorators), and finally performing each note with clarity (tasks). With this setup, AI characters act with the precision and responsiveness that elevate gameplay into artistry.


Extending Behavior Trees in Unreal Engine

Behavior Trees in Unreal Engine provide a structured way to design intelligent decision-making for non-player characters (NPCs). While basic trees can handle simple tasks like patrolling or attacking, extending them allows developers to build sophisticated, context-aware behaviors that scale with game complexity. By integrating decorators, services, custom tasks, and external systems, developers can elevate AI from predictable patterns to lifelike performances.

Building on the Basics

A standard Behavior Tree begins with composites—Selectors and Sequences—that branch decision-making. However, these structures alone are not enough for advanced games. Extending a Behavior Tree means adding layers of control, adaptability, and modularity so the AI reacts fluidly to changing conditions.

Using Decorators Effectively

Decorators act as conditional checks on tree branches. Extending their use allows more nuanced logic. For example, a simple “Can See Player” decorator can be combined with distance and health checks, ensuring the AI decides whether to attack, flee, or call for reinforcements. Developers can write custom decorators in Blueprint or C++ to test variables like “Is Ammo Low?” or “Is Companion Nearby?” This adds depth to decision-making and ensures characters feel unique.
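
A hedged sketch of such a custom decorator is shown below; the class name, key, and range value are illustrative. It allows its branch to run only while the Blackboard's target is within attack range.

// BTDecorator_TargetInRange.h (hypothetical decorator)
#pragma once

#include "CoreMinimal.h"
#include "BehaviorTree/BTDecorator.h"
#include "BTDecorator_TargetInRange.generated.h"

UCLASS()
class UBTDecorator_TargetInRange : public UBTDecorator
{
    GENERATED_BODY()

public:
    // Blackboard key that stores the current target actor.
    UPROPERTY(EditAnywhere, Category = "Blackboard")
    FBlackboardKeySelector TargetKey;

    UPROPERTY(EditAnywhere, Category = "Condition")
    float MaxAttackRange = 300.0f;

protected:
    virtual bool CalculateRawConditionValue(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) const override;
};

// BTDecorator_TargetInRange.cpp
#include "BTDecorator_TargetInRange.h"
#include "AIController.h"
#include "BehaviorTree/BlackboardComponent.h"

bool UBTDecorator_TargetInRange::CalculateRawConditionValue(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) const
{
    const AAIController* Controller = OwnerComp.GetAIOwner();
    const UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
    const AActor* Target = Blackboard ? Cast<AActor>(Blackboard->GetValueAsObject(TargetKey.SelectedKeyName)) : nullptr;
    const APawn* Pawn = Controller ? Controller->GetPawn() : nullptr;

    if (!Target || !Pawn)
    {
        return false; // No target known: the attack branch stays closed.
    }

    // Open the branch only when the target is within attack range.
    return FVector::Dist(Pawn->GetActorLocation(), Target->GetActorLocation()) <= MaxAttackRange;
}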

Adding Services for Continuous Updates

Services are another tool for extending Behavior Trees. While tasks execute discrete actions, services run in the background to update Blackboard keys regularly. For instance, a service might continuously check line-of-sight to the player or monitor health levels. This live updating allows NPCs to shift behavior mid-action—for example, abandoning a chase if the player disappears behind cover. Services extend Behavior Trees by enabling ongoing evaluation rather than one-time checks.
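
A minimal service sketch follows: on each service tick it refreshes a can-see-target Boolean in the Blackboard using a line-of-sight check. The names are placeholders, and the tick interval would be configured on the node in the Behavior Tree editor.

// BTService_UpdateTargetVisibility.h (hypothetical service)
#pragma once

#include "CoreMinimal.h"
#include "BehaviorTree/BTService.h"
#include "BTService_UpdateTargetVisibility.generated.h"

UCLASS()
class UBTService_UpdateTargetVisibility : public UBTService
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere, Category = "Blackboard")
    FBlackboardKeySelector TargetKey;

    UPROPERTY(EditAnywhere, Category = "Blackboard")
    FBlackboardKeySelector CanSeeTargetKey;

protected:
    virtual void TickNode(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory, float DeltaSeconds) override;
};

// BTService_UpdateTargetVisibility.cpp
#include "BTService_UpdateTargetVisibility.h"
#include "AIController.h"
#include "BehaviorTree/BlackboardComponent.h"

void UBTService_UpdateTargetVisibility::TickNode(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory, float DeltaSeconds)
{
    Super::TickNode(OwnerComp, NodeMemory, DeltaSeconds);

    AAIController* Controller = OwnerComp.GetAIOwner();
    UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
    if (!Controller || !Blackboard)
    {
        return;
    }

    AActor* Target = Cast<AActor>(Blackboard->GetValueAsObject(TargetKey.SelectedKeyName));

    // LineOfSightTo performs a visibility trace toward the target actor.
    const bool bCanSee = Target && Controller->LineOfSightTo(Target);
    Blackboard->SetValueAsBool(CanSeeTargetKey.SelectedKeyName, bCanSee);
}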

Creating Custom Tasks

Though Unreal provides built-in tasks such as Move To or Wait, complex behaviors often require custom tasks. These can be coded in C++ or built with Blueprints. Examples include:

  • A “Take Cover” task that finds the nearest safe location.
  • A “Use Ability” task that triggers animations and applies effects.
  • A “Coordinate with Allies” task that sends Blackboard updates to nearby NPCs.

By modularizing custom actions into tasks, developers keep their Behavior Trees clean while expanding the AI’s repertoire.

Integrating with Other Systems

Behavior Trees are strongest when extended to interact with Unreal’s other AI tools. For example:

  • Perception System: Feeding sensory data (sight, hearing, or custom senses) directly into the Blackboard for real-time reactions.
  • Navigation System: Using Smart Links or dynamic NavMesh updates to allow advanced movement options like climbing or jumping.
  • Animation Blueprints: Triggering context-sensitive animations such as crouching, dodging, or gesturing to allies.

This integration ensures AI behaviors are not isolated decisions but part of a complete performance.

Debugging and Iteration

Extending Behavior Trees also involves iterative testing. Unreal’s Behavior Tree debugger highlights active nodes during runtime, making it easier to see how new services, tasks, or decorators interact. Developers can refine trees by gradually layering complexity, ensuring each extension adds clarity rather than confusion.

Conclusion

Extending Behavior Trees transforms them from simple flowcharts into dynamic systems that model lifelike decision-making. Through decorators, services, custom tasks, and integration with perception and navigation, AI can shift seamlessly between strategies, adapting to both the world and the player’s actions. For you, John, this process resembles expanding violin technique: once the basics of bowing and fingering are secure, adding advanced articulations, phrasing, and expression extends performance into artistry. Similarly, extending Behavior Trees allows game AI to move beyond functionality into personality and depth.


Improving Agents with the Perception System in Unreal Engine

One of the most powerful ways to enhance the intelligence and believability of AI agents in Unreal Engine is by using the Perception System. While navigation and behavior trees control movement and decision-making, the perception framework gives AI the ability to sense the world. By adding sight, hearing, or custom senses, developers can transform static agents into responsive, lifelike characters.

What Is the Perception System?

The AI Perception System (AIPerception) provides a modular way for agents to detect stimuli in their environment. It acts as the AI’s sensory input, feeding data into a Blackboard or directly influencing a Behavior Tree. Each perception component is attached to an AI Controller, enabling that agent to “see,” “hear,” or detect custom signals within defined parameters.

This system is not designed for perfect realism but rather for gameplay balance and believability. It allows NPCs to respond naturally to players and environmental events, increasing immersion.

Core Senses and Their Applications

  1. Sight – The most common sense, defined by parameters such as peripheral vision angle, vision radius, and detection by affiliation (friendly, neutral, hostile). It enables guards to patrol and spot intruders, or companions to follow the player reliably.
  2. Hearing – Based on sound events broadcast in the world, with radius and decay settings. For instance, footsteps or gunshots can trigger AI to investigate.
  3. Damage – A simple sense that notifies the AI when it takes damage, prompting immediate defensive or retaliatory behavior.
  4. Custom Senses – Developers can create unique senses, such as detecting magical energy, vibrations, or proximity triggers, depending on the game’s design.

Integrating with Behavior Trees

Perception data becomes powerful when connected to Blackboards and Behavior Trees. For example, when an AI perceives a player through sight, the Blackboard updates with the target’s location. A Behavior Tree can then switch branches: from patrolling → to chasing → to attacking. If the player escapes and the sense times out, the tree might direct the AI to search the last known location or return to patrol.

This integration creates fluid, reactive AI, rather than scripted or predictable sequences.
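
A hedged C++ sketch of this wiring is shown below: an AI Controller configures a sight sense, listens for perception updates, and writes the perceived actor into a Blackboard key. The class name and the TargetActor key are placeholders, and a Behavior Tree with a matching Blackboard is assumed to already be running on the controller.

// GuardAIController.h (hypothetical class name)
#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "Perception/AIPerceptionTypes.h"
#include "GuardAIController.generated.h"

UCLASS()
class AGuardAIController : public AAIController
{
    GENERATED_BODY()

public:
    AGuardAIController();

protected:
    UPROPERTY(VisibleAnywhere, Category = "AI")
    class UAIPerceptionComponent* SightPerception = nullptr;

    UFUNCTION()
    void HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus);
};

// GuardAIController.cpp
#include "GuardAIController.h"
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"
#include "BehaviorTree/BlackboardComponent.h"

AGuardAIController::AGuardAIController()
{
    SightPerception = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("SightPerception"));

    // Configure the sight sense: detection radius, lose-sight radius, field of view, affiliation filters.
    UAISenseConfig_Sight* SightConfig = CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));
    SightConfig->SightRadius = 1500.0f;
    SightConfig->LoseSightRadius = 2000.0f;
    SightConfig->PeripheralVisionAngleDegrees = 70.0f;
    SightConfig->DetectionByAffiliation.bDetectEnemies = true;
    SightConfig->DetectionByAffiliation.bDetectNeutrals = true;
    SightConfig->DetectionByAffiliation.bDetectFriendlies = false;

    SightPerception->ConfigureSense(*SightConfig);
    SightPerception->SetDominantSense(SightConfig->GetSenseImplementation());
    SightPerception->OnTargetPerceptionUpdated.AddDynamic(this, &AGuardAIController::HandlePerceptionUpdated);
}

void AGuardAIController::HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
    // Push the sensed (or lost) target into the Blackboard so the Behavior Tree can switch branches.
    if (UBlackboardComponent* Blackboard = GetBlackboardComponent())
    {
        Blackboard->SetValueAsObject(TEXT("TargetActor"), Stimulus.WasSuccessfullySensed() ? Actor : nullptr);
    }
}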

Improving Realism and Performance

To improve both quality and efficiency, developers can:

  • Adjust Sensory Parameters: Fine-tune vision cones or hearing radii to reflect realistic detection without overburdening performance.
  • Use Affiliation Filters: Ensure agents only respond to appropriate stimuli, e.g., enemies ignore teammates’ footsteps.
  • Implement Forget Times: Add memory by letting sensed stimuli fade after a delay. This creates believable “search” behavior rather than instant forgetfulness.
  • Limit Updates: Avoid unnecessary checks by adjusting update intervals, particularly in large-scale environments with many AI agents.

Debugging Tools

Unreal provides perception debugging overlays, allowing developers to visualize sight cones, hearing radii, and detected stimuli during gameplay. These tools are invaluable for troubleshooting why an AI agent failed to react—or reacted too strongly—to an event.

Conclusion

The Perception System elevates AI by giving agents awareness of their surroundings. By combining sight, hearing, damage, and custom senses with Behavior Trees and Blackboards, developers can craft NPCs that react in believable, context-sensitive ways. For you, John, this is akin to a violinist’s ear training: just as developing keen listening refines a musician’s response to pitch, rhythm, and ensemble, giving AI “senses” refines its responsiveness to the game world. With perception, agents no longer move blindly—they interpret, adapt, and perform.


Understanding the Environment Query System in Unreal Engine

The Environment Query System (EQS) in Unreal Engine is a powerful tool for enhancing artificial intelligence (AI) decision-making. While Behavior Trees define what an AI wants to do, EQS helps determine where or how to do it. By running context-sensitive queries on the game world, AI agents can dynamically select the best positions, objects, or actions to achieve their goals. This system provides the flexibility and intelligence needed for believable, adaptive gameplay.

What Is EQS?

At its core, EQS is a query framework that allows AI to evaluate the environment and rank potential options based on tests. For example, when an NPC needs cover, EQS can scan the level, identify valid cover spots, and score them by distance, visibility, or safety. Instead of manually scripting every possibility, developers can let EQS evaluate the world and pick the best choice.

EQS integrates seamlessly with Behavior Trees through specialized task nodes. When a Behavior Tree branch requires environmental reasoning—such as choosing where to move or what to interact with—the tree can call an EQS query and use the result to guide action.

Core Components of EQS

  1. Queries – Each EQS query begins with a generator, defining what candidates will be tested. Examples include points on a grid, random locations, or actors in the world.
  2. Tests – Once generated, each candidate is evaluated through tests. Common tests measure:
    • Distance: Favoring locations close to or far from the player.
    • Visibility: Checking line of sight.
    • Pathfinding: Ensuring a location is reachable via NavMesh.
    • Custom Conditions: Developers can build their own tests, such as checking light levels or terrain type.
  3. Scoring and Filtering – Each test assigns scores to candidates. EQS filters out unqualified options, then ranks the remainder, returning the best match or a weighted random choice.

Practical Applications

EQS extends AI capabilities in several important ways:

  • Tactical Positioning: Enemies can dynamically choose the best cover, flank points, or retreat routes.
  • Search Behaviors: Agents can scatter queries around the player’s last known location to simulate searching.
  • Companion AI: Allies can find positions near the player while avoiding overlapping.
  • Dynamic Object Interaction: AI can select which health pack to grab, which weapon to use, or which door to guard.

These behaviors give NPCs the appearance of strategic reasoning, enriching gameplay.
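
To make the generate-test-score loop concrete, here is a small conceptual sketch in plain C++. It is not the EQS API itself; it only mirrors how a cover query might filter unreachable or exposed candidates and then score the survivors by distance.

#include <cmath>
#include <vector>

// One candidate point produced by a generator, with the results of the tests applied to it.
struct FCoverCandidate
{
    float DistanceToPlayer = 0.0f;   // distance test
    bool  bVisibleToPlayer = true;   // line-of-sight test
    bool  bReachable       = false;  // NavMesh pathfinding test
};

// Returns the index of the best candidate, or -1 if every candidate was filtered out.
int PickBestCoverIndex(const std::vector<FCoverCandidate>& Candidates, float PreferredDistance)
{
    int   BestIndex = -1;
    float BestScore = -1e30f;

    for (int i = 0; i < static_cast<int>(Candidates.size()); ++i)
    {
        const FCoverCandidate& C = Candidates[i];

        // Filtering: hard tests remove candidates outright.
        if (!C.bReachable || C.bVisibleToPlayer)
        {
            continue;
        }

        // Scoring: softer tests rank the remaining candidates
        // (here, closeness to a preferred engagement distance).
        const float Score = -std::fabs(C.DistanceToPlayer - PreferredDistance);
        if (Score > BestScore)
        {
            BestScore = Score;
            BestIndex = i;
        }
    }
    return BestIndex;
}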

Performance Considerations

Because EQS involves scanning and evaluating multiple points, it can be computationally expensive if used carelessly. Developers should optimize by:

  • Keeping query radii reasonable.
  • Reducing test complexity when possible.
  • Running queries at controlled intervals rather than every frame.
  • Using simplified collision and NavMesh checks for large numbers of agents.

Debugging Tools

Unreal provides EQS visualization tools to preview queries in real time. Developers can see generated points, test results, and scores, making it easier to refine logic and ensure queries produce believable outcomes.

Conclusion

The Environment Query System transforms Unreal Engine AI from reactive to strategic. By generating, testing, and scoring environmental options, EQS gives agents the ability to select optimal behaviors in real time. For you, John, this parallels a violinist’s interpretive decision-making: while the score provides the notes (Behavior Trees), the performer chooses phrasing, bowing, and dynamics (EQS) based on context. Together, these layers turn basic instruction into artistry—and in games, basic AI into compelling opponents and allies.


Using Hierarchical State Machines with State Trees in Unreal Engine

Artificial Intelligence (AI) in Unreal Engine has evolved beyond simple scripts and Behavior Trees to include State Trees, a system for building hierarchical state machines (HSMs). While Behavior Trees excel at decision-making and branching logic, State Trees are designed to represent layered states of behavior, allowing AI agents to move seamlessly between complex conditions such as idle, patrol, combat, or retreat. By using hierarchical state machines with State Trees, developers gain a clear, structured framework for managing AI behavior that feels natural and responsive.

What Are State Trees?

A State Tree is Unreal Engine’s implementation of a hierarchical state machine. Unlike flat state machines where each state must directly transition to another, hierarchical systems nest states inside parent states. This enables inheritance of behaviors and cleaner management of transitions. For example, a parent state like “Combat” might contain sub-states such as “Chase,” “Attack,” and “Take Cover.” All these share combat-level rules but differ in execution.

State Trees provide a visual editor similar to Behavior Trees, offering intuitive design tools for branching and transitions. They integrate tightly with Unreal’s gameplay framework, using evaluators, conditions, and tasks to define how states activate and execute.

Core Components

  1. States – Represent modes of behavior such as Idle, Patrol, or Chase. Each can include entry tasks, ongoing tasks, and exit tasks.
  2. Transitions – Define the conditions under which the AI switches states. For instance, transitioning from Patrol to Chase might depend on detecting a player.
  3. Evaluators – Continuously check conditions or update Blackboard-like data, ensuring the AI can react without delay.
  4. Tasks – Concrete actions performed in each state, such as moving, waiting, playing animations, or firing weapons.

The hierarchy allows sub-states to inherit logic from their parent. This keeps design organized while reducing duplication.
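
The plain C++ sketch below is a conceptual illustration of that hierarchy rather than the State Tree API: the parent-level decision (peaceful versus combat) is made once, and only the children of the active parent are considered afterwards. All names and thresholds are illustrative.

// Conceptual hierarchical state machine; not Unreal's State Tree classes.
enum class EParentState { Peaceful, Combat };
enum class EChildState  { Idle, Patrol, Chase, Attack, TakeCover };

struct FAgentContext
{
    bool  bSeesPlayer      = false;
    float DistanceToPlayer = 100000.0f;
    float Health           = 100.0f;
};

struct FHierarchicalState
{
    EParentState Parent = EParentState::Peaceful;
    EChildState  Child  = EChildState::Idle;

    void Update(const FAgentContext& Context)
    {
        // Parent-level transition: every child of Peaceful hands over to Combat
        // when the player is seen, without enumerating each child-to-child pair.
        Parent = Context.bSeesPlayer ? EParentState::Combat : EParentState::Peaceful;

        // Child-level transitions only consider siblings under the active parent.
        if (Parent == EParentState::Peaceful)
        {
            Child = EChildState::Patrol;
        }
        else if (Context.Health < 25.0f)
        {
            Child = EChildState::TakeCover;
        }
        else
        {
            Child = (Context.DistanceToPlayer < 200.0f) ? EChildState::Attack : EChildState::Chase;
        }
    }
};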

Practical Use Cases

  • NPC Patrol and Combat: An NPC may start in Idle → Patrol (parent state). Within Combat, sub-states like Chase or Attack allow fine control without cluttering the top level.
  • Companion AI: A companion might have a global parent state of Follow, with sub-states for Close Range, Medium Range, or Far Range positioning.
  • Boss Encounters: A boss enemy can have phases as parent states, with each phase containing multiple sub-states for different attacks.

This structure simplifies complex behaviors while ensuring transitions remain logical and manageable.

Advantages Over Flat State Machines

  • Hierarchy: Parent states encapsulate broad rules, while child states define specific variations.
  • Clarity: Developers can see the AI’s overall structure at a glance.
  • Reusability: States and tasks can be reused across different agents.
  • Integration: State Trees work alongside Behavior Trees and the Perception System, making them complementary rather than replacements.

Debugging and Optimization

Unreal includes debugging tools to visualize State Trees in action. Developers can track which states are active, why transitions occur, and whether evaluators are updating as intended. For performance, keeping evaluators lightweight and transitions efficient ensures smooth gameplay even with multiple AI agents.

Conclusion

Using hierarchical state machines with State Trees empowers developers to create structured, scalable AI behaviors. By layering parent and child states, NPCs can shift fluidly between idle, patrol, and combat without tangled logic. For you, John, this mirrors musical interpretation: just as a violinist frames a piece with overarching phrasing (parent state) while refining note-level expression (sub-states), State Trees combine high-level structure with detailed nuance—turning functional AI into expressive performance.


Implementing Data-Oriented Calculations with Mass in Unreal Engine

As games become larger and more complex, handling thousands of AI agents or interactive entities can strain traditional object-oriented systems. Unreal Engine addresses this challenge through Mass, a data-oriented framework designed to scale behavior, simulation, and calculation across massive numbers of entities. Implementing data-oriented calculations with Mass allows developers to manage performance efficiently while still creating detailed, lifelike worlds.

What Is Mass?

Mass is Unreal Engine’s Entity Component System (ECS) framework. Unlike the standard Actor/Component model, which ties logic directly to objects, Mass separates data from behavior. Entities are lightweight containers of data fragments, while systems process these fragments in bulk. This approach minimizes overhead and enables calculations to run on thousands—or even millions—of entities in parallel.

The philosophy is data-oriented: store information efficiently in memory, and process it with streamlined systems that benefit from modern CPU cache and parallel execution. For AI, this means swarms, crowds, or background populations can be simulated without crippling performance.

Core Components of Mass

  1. Entities – Extremely lightweight identifiers that represent AI agents, vehicles, projectiles, or any other object requiring simulation.
  2. Fragments – Units of data (e.g., position, velocity, health). Entities hold fragments, defining their characteristics.
  3. Systems – Functions that operate on entities with specific fragments. For example, a movement system updates all entities with position and velocity fragments.
  4. Processors – Specialized systems that group and optimize calculations, ensuring data is processed efficiently.
  5. Observers – Handle events like entity creation or destruction, keeping simulations consistent.

Together, these components allow developers to design flexible yet highly optimized calculations.

Implementing Data-Oriented Calculations

To implement a calculation with Mass, developers follow a structured approach:

  1. Define Fragments: Create fragments to store data. For instance, a “Position” fragment and a “Velocity” fragment would be required for movement.
  2. Create Systems: Write a system that processes all entities with the necessary fragments. The movement system might calculate new positions by applying velocity to position each tick.
  3. Register Entities: Instantiate entities by assigning them the fragments they need. A vehicle entity might have fragments for Position, Velocity, and Fuel, while a pedestrian might only use Position and Goal.
  4. Run Calculations in Bulk: When the simulation runs, the system processes all matching entities simultaneously, leveraging cache-friendly memory layouts for performance.

For example, instead of running thousands of “MoveTo” calculations for each NPC individually, Mass can calculate movement updates for all relevant entities in one pass.
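
The sketch below illustrates that idea in plain C++ rather than with the Mass classes themselves: fragment data lives in tightly packed parallel arrays, and a single movement system updates every matching entity in one cache-friendly pass.

#include <cstddef>
#include <vector>

struct FPositionFragment { float X = 0, Y = 0, Z = 0; };
struct FVelocityFragment { float X = 0, Y = 0, Z = 0; };

struct FMovementData
{
    // Parallel arrays: index i describes entity i. Contiguous storage keeps the CPU cache warm.
    std::vector<FPositionFragment> Positions;
    std::vector<FVelocityFragment> Velocities;
};

// A "movement system": processes every entity that has Position + Velocity in one bulk loop,
// instead of ticking thousands of individual actor objects.
void RunMovementSystem(FMovementData& Data, float DeltaSeconds)
{
    const std::size_t Count = Data.Positions.size();
    for (std::size_t i = 0; i < Count; ++i)
    {
        Data.Positions[i].X += Data.Velocities[i].X * DeltaSeconds;
        Data.Positions[i].Y += Data.Velocities[i].Y * DeltaSeconds;
        Data.Positions[i].Z += Data.Velocities[i].Z * DeltaSeconds;
    }
}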

Advantages of Mass

  • Performance: Data-oriented design maximizes CPU efficiency and allows scaling to enormous populations.
  • Flexibility: Entities can be composed of any combination of fragments, making them easy to extend.
  • Parallelism: Systems can be multithreaded, distributing calculations across cores.
  • Integration: Mass works with Unreal’s Navigation, Perception, and Behavior Trees, enabling hybrid workflows.

Conclusion

Implementing data-oriented calculations with Mass allows developers to simulate vast numbers of entities in Unreal Engine without sacrificing performance. By structuring entities as lightweight containers and running bulk calculations through systems, AI can scale from a few characters to living cities or battlefields. For you, John, this resembles orchestrating a symphony: instead of rehearsing each instrument separately, the conductor addresses entire sections at once, ensuring efficiency, harmony, and power. In the same way, Mass turns thousands of isolated calculations into one cohesive performance.


Implementing Interactable Elements with Smart Objects in Unreal Engine

In Unreal Engine, creating believable worlds often requires more than movement and combat—it requires meaningful interactions between AI agents and the environment. This is where Smart Objects come into play. Introduced as part of Unreal’s Mass framework, Smart Objects provide a scalable, data-oriented way to make world elements interactable. By implementing Smart Objects, developers can allow AI and even players to engage dynamically with props, furniture, devices, or environmental features, enriching immersion and gameplay.

What Are Smart Objects?

A Smart Object is essentially a world element that has defined “uses” or “activities” associated with it. Unlike static props, Smart Objects can advertise their functionality to AI agents, who then choose whether and how to interact. For example, a bench might broadcast that it can be sat on, while a door can be opened, or a workstation used.

These interactions are not hardcoded into the object itself. Instead, Smart Objects use descriptors and behaviors that can be attached to different entities, keeping the system flexible and data-driven. This aligns with Unreal’s goal of building scalable simulations, especially in open-world or large-population games.

Core Components of Smart Objects

  1. Smart Object Definition – The data asset describing what interactions are available (e.g., “Sit,” “Open,” “Use”).
  2. Smart Object Component – Added to an in-world actor to designate it as a Smart Object, linking it to its definition.
  3. Claiming and Reservation – AI agents must “claim” an object before using it, ensuring multiple entities don’t overlap unrealistically.
  4. Behavior Integration – The interaction is usually tied to AI logic such as a Behavior Tree or State Tree, directing agents to approach, claim, and execute the interaction.

Setting Up Interactable Elements

To implement Smart Objects in Unreal, developers follow a structured workflow:

  1. Create a Smart Object Definition: Define the actions available (e.g., Sit, Use, Rest). This acts as the blueprint for interaction.
  2. Add a Smart Object Component: Attach this to a prop in the level, such as a chair or control panel. Link it to the definition so it advertises its functionality.
  3. Integrate with AI Logic: In a Behavior Tree, add tasks for “Find Smart Object” or “Use Smart Object.” The AI then queries available objects in range, claims one, and executes the interaction.
  4. Control Animation and Feedback: Link the action to animations, particle effects, or sounds. A sitting animation completes the illusion of a bench being used, for instance.

This process transforms static meshes into living parts of the environment.
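
The reservation behavior from step 3 can be sketched conceptually as follows. These are hypothetical types, not the Smart Objects API, but they show why claiming a slot before use keeps two agents from piling onto the same seat.

#include <vector>

struct FUsableSlot
{
    bool bClaimed = false; // A reservation flag prevents two agents from using the same slot.
};

struct FSimpleSmartObject
{
    std::vector<FUsableSlot> Slots; // e.g. a bench with two seats exposes two slots.

    // Returns the index of a claimed slot, or -1 if the object is fully occupied.
    int Claim()
    {
        for (int i = 0; i < static_cast<int>(Slots.size()); ++i)
        {
            if (!Slots[i].bClaimed)
            {
                Slots[i].bClaimed = true; // Reserve before walking over, so other agents skip it.
                return i;
            }
        }
        return -1;
    }

    void Release(int SlotIndex)
    {
        if (SlotIndex >= 0 && SlotIndex < static_cast<int>(Slots.size()))
        {
            Slots[SlotIndex].bClaimed = false; // Free the slot once the interaction ends.
        }
    }
};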

Applications in Gameplay

Smart Objects enable a wide variety of gameplay enhancements:

  • Ambient Crowds: Populate cities with AI using benches, vending machines, or doorways.
  • Companions: Have allies interact contextually with the world—opening doors or resting when idle.
  • Dynamic Objectives: Make mission-critical elements, like terminals or levers, accessible via Smart Object interactions.
  • Procedural Simulation: Combine Smart Objects with Mass to simulate hundreds of agents using world features naturally.

Conclusion

Smart Objects allow Unreal Engine developers to implement scalable, flexible, and believable interactable elements. By separating definitions from world actors and integrating them with AI logic, developers can populate environments with meaningful actions rather than static scenery. For you, John, this is like transforming notes on a violin page into living music: the raw symbols (objects) gain depth and expression only when brought into interaction with the performer (the AI). Similarly, Smart Objects give game worlds resonance—turning background props into integral parts of the player’s experience.


Appendix – Understanding C++ in Unreal Engine

Unreal Engine is renowned for its flexibility, scalability, and power in building everything from small indie projects to massive AAA games. At the core of this flexibility lies C++, the programming language that underpins much of the engine’s functionality. While Unreal offers powerful visual scripting through Blueprints, understanding and leveraging C++ provides developers with deeper control, higher performance, and greater extensibility. This appendix serves as an overview of why C++ is important in Unreal, how it integrates with the engine, and where it fits in the developer’s workflow.

Why C++ Matters in Unreal

Unreal Engine itself is largely written in C++, and much of its power comes from exposing this language to developers. Blueprint scripting makes the engine accessible for rapid prototyping and visual logic, but C++ provides:

  • Performance: C++ runs at native machine speed, allowing optimized gameplay systems, physics, and rendering features.
  • Control: Developers can write custom classes, override engine functionality, and build systems that extend far beyond default tools.
  • Scalability: Large projects with hundreds of actors or complex simulations benefit from the efficiency and precision that C++ delivers.

In essence, while Blueprints are excellent for iteration and design, C++ is indispensable for building robust, efficient game foundations.

The Relationship Between Blueprints and C++

Unreal supports a hybrid workflow, where C++ and Blueprints complement each other. Typically, developers implement core systems in C++ (movement logic, AI frameworks, gameplay mechanics) and expose variables or functions to Blueprints for designers to tweak without writing code. This separation keeps projects organized: programmers maintain performance-heavy systems in C++, while designers use Blueprints for creativity and experimentation.

For example, a developer might create a C++ base class for an enemy character. The AI behaviors, health system, and attack routines are coded in C++, while variations—such as damage values or patrol paths—are adjusted in Blueprint subclasses.
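
A hedged sketch of such a base class is shown below; the class name, categories, and default values are illustrative. The data and logic live in C++, while UPROPERTY and UFUNCTION specifiers expose tunable values and callable behavior to Blueprint subclasses and the editor.

// EnemyCharacterBase.h (hypothetical base class)
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "EnemyCharacterBase.generated.h"

UCLASS(Blueprintable)
class AEnemyCharacterBase : public ACharacter
{
    GENERATED_BODY()

public:
    // Exposed to the editor and Blueprints so designers can tune variants without touching C++.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Combat")
    float MaxHealth = 100.0f;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Combat")
    float AttackDamage = 10.0f;

    // Core logic stays in C++; BlueprintCallable lets designers trigger it from visual scripts.
    UFUNCTION(BlueprintCallable, Category = "Combat")
    void ApplyDamageToSelf(float Amount)
    {
        CurrentHealth = FMath::Max(0.0f, CurrentHealth - Amount);
    }

protected:
    UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "Combat")
    float CurrentHealth = 100.0f;
};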

Structure of C++ in Unreal

When working with Unreal’s C++ framework, developers use the following building blocks:

  1. UObjects – The base class for most Unreal objects, supporting memory management and reflection.
  2. AActors – Objects that exist in the world, such as characters, props, or cameras.
  3. UComponents – Modular pieces of functionality attached to Actors, like movement or rendering.
  4. Macros – Unreal’s macros (e.g., UCLASS, UPROPERTY, UFUNCTION) integrate C++ with the engine’s reflection system, making variables and functions available to Blueprints and the editor.

This structure enables seamless interaction between C++ code, Blueprints, and the editor interface.

Getting Started with C++ in Unreal

To begin, developers typically:

  1. Create a new C++ class in Unreal (e.g., deriving from Actor or Character).
  2. Add properties and functions with macros to expose them in Blueprints.
  3. Compile the code, which updates the engine to recognize the new class.
  4. Extend functionality either in C++ directly or by creating Blueprint subclasses for visual scripting.

This workflow gives the best of both worlds: the performance and control of C++ with the usability of Blueprints.

Conclusion

Understanding C++ in Unreal Engine is essential for harnessing the engine’s full potential. While Blueprints provide accessibility and speed, C++ underpins the systems that make complex, large-scale projects possible. For you, John, this is like learning advanced violin technique: while simple pieces can be played with basic methods, mastery requires deeper control of bowing, fingering, and tone production. In the same way, mastering C++ transforms Unreal from a creative sandbox into a powerful professional tool—capable of producing performances, or in this case, games, of the highest caliber.

 

 
