Pallas Athena

Making Interactive SVG with Behavior Trees

In this Post ...

I'll explore working with Behavior Trees to create interactive SVG for games, simulations, immersive experiences for training and much, much more.

Introduction

Some of my favorite projects from back in the 'teens involved creating interactive training sims. Passionate about digital art and programming, I found these projects gave me the opportunity to apply years of work in 3D graphics and artificial intelligence to creating interactive, immersive educational experiences. I was reminded of that experience recently while expanding the framework around the SVG Creators Collaborative™ to develop interactive SVG. Back then, the work involved applying Behavior Trees to define intelligent behavior for non-player characters (NPC's) in training simulations.

Behavior Trees (BT's) are a type of AI control system used in video games, robotics, business process orchestration -- the list goes on. Originally created to manage the decision-making process of video game NPC's, BT's provide a hierarchical and modular way to structure behavior allowing for complex and dynamic interactions among characters and objects. Today, behavior trees remain an important part of the portfolio of techniques that can be applied to the development of intelligent systems.

Behavior Trees 101

Behavior Trees are a great fit when you're not overly concerned with complex AI but do need structured, readable logic. In a nutshell, BT's are composed of:

  1. Actions and Conditions (the leaves of the tree),
  2. Sequence nodes and Selectors, and
  3. the BT Root.

Action nodes define the actual behaviors. Condition nodes embody the logic that enables decision-making behavior in your agents. Sequences and selectors are the control nodes of the system and function as logical "gates". The root is the entry point for evaluation of the tree.

Example: Clearing a Building

Let's consider an example. Here's a Behavior Tree that defines a series of actions and conditions for an action sim.

ROOT: (Clear Building)
    |
    |-- SEQUENCE
            |
            |-- ACTION: seek target
            |
            |-- SELECT
                    |
                    |-- SEQUENCE
                    |       |
                    |       |-- CONDITION: if accessible
                    |       |
                    |       |-- ACTION: gain entry
                    |
                    |-- SEQUENCE
                            |
                            |-- ACTION: target structure
                            |
                            |-- ACTION: explode wall
Figure 1. A Behavior Tree (BT) containing nodes with actions and conditions associated with agents in the sim.

Analysis

A Behavior Tree is invoked at specific intervals, or ticks, which occur over the course of a simulation. Traversal starts at the ROOT, the entry point for the decision-making process, and proceeds depth-first. Over the course of the traversal each node returns a status (SUCCESS, FAILURE, or IN PROGRESS) which determines the overall result of the traversal.

In this example, the root's first child is a SEQUENCE node.

Sequence Nodes

SEQUENCE nodes execute their children in order, evaluating the status of each child along the way. On SUCCESS the sequence proceeds to the next child. On FAILURE the sequence is "short-circuited" and subsequent children are not traversed. If a child returns IN PROGRESS the sequence stops there and resumes on the next tick. Contrast that behavior with SELECT.

Select Nodes

SELECT nodes also execute their children in order but differ in how each child's result status affects the traversal. If a child of the SELECT returns SUCCESS (meaning its operation completed successfully), no subsequent children are traversed and the SELECT node itself returns SUCCESS. If, on the other hand, a child returns FAILURE, the SELECT proceeds to the next child it contains. Finally, if a child returns IN PROGRESS, the implication is that its operation has not yet completed; the SELECT halts and resumes at that node on the next tick.

The Story so Far

In sum, sequence and select nodes define the control-flow logic of the BT. Use sequences to define a set of actions and conditions that you want to execute in series. Selector nodes allow behavior to be dynamically selected from a set of choices. It's well worth noting that selectors imply priority: they enable you to define a behavior but provide fallbacks should higher-priority items fail on any given time slice. (Note that the implementation below departs slightly from the classic resume-on-next-tick semantics; see End Note 1.)

Example Source Code

In this section I'll show how the concepts I've just laid out translate into source code. The following listings illustrate Behavior Tree node implementations using ES6.¹

The Traversal Framework

First we define the Behavior Tree statuses as constants.

export const btConstants = {
    SUCCESS : "1",
    FAILURE: "-1",
    IN_PROGRESS : "0",
}

Next we define a base class in which we specify a tick method. This is the method that gets called to traverse the tree. The parameter for this method is a reference to the blackboard, a data structure that can be used to read and write state across ticks and across agent-specific BT's. In the SVG Creative Collab framework the preference is to use the blackboard sparingly (as you'll see shortly).

export class BTNode {
    tick(blackboard) { 
        return btConstants.SUCCESS; 
    }
}

Next we define the Root class ...

export class Root extends BTNode {
    constructor(children) {
        super();
        this.children = children;
    }

    tick(bb) {
        let anyRunning = false;
        let anyFailure = false;

        for (let child of this.children) {
            const status = child.tick(bb);

            if (status === btConstants.IN_PROGRESS) {
                anyRunning = true;
            } else if (status === btConstants.FAILURE) {
                anyFailure = true;
            }
        }

        if (anyRunning) return btConstants.IN_PROGRESS;
        if (anyFailure) return btConstants.FAILURE;
        return btConstants.SUCCESS;
    }
}

Next come the classes implementing the sequence and selector node types, which control the traversal as described above.

export class Sequence extends BTNode {
    constructor(children) {
        super();
        this.children = children;
        this.current = 0;
    }
    tick(bb) {
        while (this.current < this.children.length) {
            let status = this.children[this.current].tick(bb);
            if (status === btConstants.IN_PROGRESS || status === btConstants.FAILURE) {
                this.current = 0;
                return status;
            }
            this.current++;
        }
        this.current = 0;
        return btConstants.SUCCESS;
    }
}

export class Selector extends BTNode {
    constructor( children ) {
        super();
        this.children = children;
    }
    tick(bb) {
        for (let child of this.children) {
            let status = child.tick(bb);
            if (status !== btConstants.FAILURE) {
                return status; 
            }
        }
        return btConstants.FAILURE;
    }
}

Finally we define the leaf node implementations: Action and Condition.

export class Condition extends BTNode {
    constructor(fn) {
        super();
        this.fn = fn;
    }
    tick( bb ) {
        const result = this.fn( bb );
        return result ;
    }
}

export class Action extends BTNode {

    constructor(fn) { 
        super(); 
        this.fn = fn; 
    }

    tick( bb ) { 
        return this.fn( bb ); 
    }
}

The implementation is minimal by design -- this makes the Behavior Tree highly flexible. The node doesn't implement the behavior itself. Instead, you just inject a function (fn) at construction. Actions and conditions are simply lambdas which associate behaviors with implementing agents, as we'll see presently.

The Behavior Tree Factory Pattern

Given the framework, we can readily construct new behavior trees using a convenient factory pattern. Here I'll implement a behavior tree for the high-level example I sketched out above. (In the listing, l is a logging shorthand.)

export function makeTankBT() {
    return new Root([
        new Sequence([
            new Action(bb => {
                const status = bb.sprite.seek() ;
                return status;
            }),
            new Selector([
                new Sequence([
                    new Condition( bb => {
                        if( bb.accessable) {
                            l( "Target accessable" );
                            return btConstants.SUCCESS;
                        }
                        return btConstants.FAILURE;
                    }),
                    new Action( bb => {
                        l( "SECURE TARGET!" );
                        return btConstants.SUCCESS;
                    })
                ]),
                new Sequence([
                    new Action(bb => {
                        l( "AIM!" );
                        const status = bb.sprite.targetForDestruction();
                        return status;
                    }),
                    new Action(bb => {
                        l( "FIRE!" );
                        const status = bb.sprite.fire();
                        bb.accessable=true;
                        return status;
                    })
                ]),
            ]),

        ]),
    ]);
}

I like this pattern because the relationship between a high-level sketch like the one in Figure 1 and the actual BT implementation is quite transparent.
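
To give a sense of how a tree built this way gets driven, here's a minimal sketch of a tick loop. The stub sprite, the blackboard shape, and the requestAnimationFrame harness are illustrative assumptions (and the l(...) logging shorthand above is assumed to resolve to something like console.log); the framework's real sprites do considerably more.

// Assumes btConstants and makeTankBT are imported from the modules above.
// A hypothetical stub standing in for a real tank sprite. Each method
// reports a BT status, exactly as the Action nodes above expect.
const stubSprite = {
    seek()                 { return btConstants.SUCCESS; }, // pretend we've already arrived
    targetForDestruction() { return btConstants.SUCCESS; },
    fire()                 { return btConstants.SUCCESS; }
};

// The blackboard carries the sprite plus any shared flags the tree reads and writes.
const blackboard = { sprite: stubSprite, accessable: false };

const bt = makeTankBT();

// Tick the tree once per animation frame until it reports SUCCESS.
// (A real loop would also decide what to do on FAILURE.)
function loop() {
    if (bt.tick(blackboard) !== btConstants.SUCCESS) {
        requestAnimationFrame(loop);
    }
}
requestAnimationFrame(loop);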

Demo: Behavior Trees in Action

After laying all that out I thought it'd be nice to see the BT in action. To that end I've inlined a working model below.

Demo: Automating sprite behavior with Behavior Trees.

Discussion and Key Takeaways

Having discussed the what of Behavior Trees, I think we're ready to discuss the philosophy behind their application to creating interactive SVG artworks using the SVG Creators' Collab™ framework. Given the recent explosion of AI technologies, it's important to develop an understanding of how and where specific technologies fit into the development of intelligent systems.

Over the course of developing intelligent systems, architects and engineers face countless decisions daily. Where does this particular slice of state belong? Who's responsible for controlling that action? Those sorts of questions, among others.

To me one of the most important aspects of Behavior Trees is that they provide a formal, modular framework for defining and orchestrating the behaviors of intelligent systems. BT's can be used "top down", incrementally and iteratively, as much to guide the specification of implementation details for intelligent systems as to implement and enforce behavioral constraints.

As a "rule of thumb" I like to think of Behavior Trees as defining what intelligent actors or agents should be doing within the constraints of a system. But the how of what to do should be defined on the agents themselves. Consider, for example, game sprites in an educational simulation. In such scenarios I treat state as belonging on the sprite itself as intelligent agent. Sprites should have their own "mind". They need to "know" how to do stuff; how to move around in their environment, how to interact with objects and other agents in their "world". The Behavior Tree with it's black-board memory is kinda like a "collective unconscious" in that regard. It lays out the script of intentions (seek, pick a target, shout, fire) which drive the agents to execute their known behaviors. The BT just orchestrates decision-making flow. The state and execution details of the behaviors live in the actors.

Bottom line?

  • BT = desire, collective unconscious, "what to do."
  • Sprite = will, working memory, "how to do it."

The beauty of this approach is that the BT stays stateless (simpler, easier to debug). All persistence (targets, cool-down timers, en-route flags) lives in the sprites -- which already have to track all that stuff anyway. The blackboard simply offers a means to enable inter-agent communication. Offloading state to the sprite implies design constraints that give rise to better BT designs. And that's the most crucial part of working with Behavior Trees: good design.

Behavior Trees + State Machines: Best of Both Worlds

At this point I feel the need to make explicit that -- more often than not -- the conditions and actions defined in terms of Behavior Trees aren't just "black box" one-shot commands. From the outset, in choosing to work with BT's, you'll likely encounter tension around how to deal with state. The blackboard offers one possible mechanism to manage state across slices of time. But in keeping with the guiding principles outlined above, I offer another approach here.

I've long been a fan of another form of AI -- namely Finite State Machines, or FSM's.² Often I've seen discussions around AI incorrectly framed as either/or propositions: "We have to decide whether to use BT's or FSM's." As if they were somehow mutually exclusive. Instead, I view both systems as working beautifully in tandem to orchestrate behavior in many different classes of intelligent systems. The example I used in this article -- the tank-bot -- uses both a behavior tree and state machine features.

Diagram illustrating a hybrid BT with agent FSM's approach to creating intelligent agents in interactive SVG
Figure 2. Diagram illustrating a hybrid BT + FSM. Design used here by N.

This hybrid approach lets the Behavior Tree handle the what ("find your target", "fire your missile", "clear the building") while the low-level details ("pre-flight setup", "in-flight course correction", "arrival at destination") are handled by "mini state machines" defined over the attributes and functions associated with specific sprites. The mini FSM associated with each action gives the agent a personal memory of progress, so actions can span multiple ticks without getting reset every frame (the seek implementation in Appendix 1 is a concrete example).

The end result is that my sprites feel like they have both:

  • A higher-level "brain" (the BT deciding intent).

  • A lower-level practical pilot's checklist (the FSM) executing the details.

This keeps the Behavior Tree readable and compact, while letting individual actions stay precise and persistent. It's an awesome pattern and I'll be using it ubiquitously throughout the SVG Creator's framework™.

A Psychological Framework for Design

With that in mind I offer the following rules of thumb. In designing behavior trees ...

  1. Let the BT be the unconscious; the sprite the state of mind.

    • The BT describes what should be attempted.
    • The sprite owns how to carry it out. It keeps track of its own continuity.
  2. BT nodes express intentions, not state.

    • A node like "pick a target" doesn't store the target -- it directs the sprite to ensure one exists.
    • If the sprite already has a target, the node should succeed immediately without resetting.
  3. Continuity belongs to the sprite.

    • Actions that span multiple ticks (e.g., seek) should live as methods on the sprite.
    • The method returns IN_PROGRESS while en route, SUCCESS when complete, or FAILURE if it gets blocked.
    • This keeps the BT simple and avoids "thrashing" between nodes.
  4. Failure should be meaningful.

    • A node should only return FAILURE when it's truly impossible to proceed (e.g., no targets exist).
    • Otherwise, prefer IN_PROGRESS or SUCCESS to avoid unnecessary resets.
  5. Idempotence is key.

    • Nodes may be ticked many times across behavioral execution.
    • Sprite methods must be written so that calling them repeatedly produces consistent results. Example: flyToTarget means "keep flying if not there yet", not "restart the flight sequence". It should be the intelligent agent's responsibility to keep track of where it is -- not some central authority micromanaging the process (see the sketch just after this list).
  6. The BT is declarative, not imperative.

    • Think of the tree as an expression of desire: "I want to have a target, fly close, then fire."
    • The sprite, like a psyche, resolves the details in its own experiential continuity.
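
To make rules 2 and 5 concrete, here's a hedged sketch of an idempotent "pick a target" action built with the framework's Action class. The sprite's target field and its acquireTarget method are hypothetical names for illustration; the point is that re-ticking the node never throws away work already done.

// Hypothetical sketch: an idempotent "ensure a target exists" action.
// The sprite owns the target; the node only expresses the intention.
const ensureTarget = new Action(bb => {
    if (bb.sprite.target) {
        // Already have one -- succeed immediately, nothing is reset.
        return btConstants.SUCCESS;
    }
    // Ask the sprite to find one; FAILURE only if none exist at all.
    return bb.sprite.acquireTarget()
        ? btConstants.SUCCESS
        : btConstants.FAILURE;
});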

These rules make for a set of beautiful design constraints that ensure BT's are built cleanly and effectively. Disciplined development is not a drawback; it's a form of insurance. The time and effort you put in up front makes the work more predictable and pays off in fewer unexpected developments in the long run.

Well that's all I got for now folks. Use these guidelines in good health!

End Notes

  1. I should point out that the implementation examples I've provided for the BT framework differ in one key aspect from classic BT implementations -- namely the behavior of sequences and selects. Given the hybrid approach of using BT's to manage more intelligent (stateful) actors (preferred by the SVG Creators' Collaborative), sequence and select nodes do not retain state and resume in place for running operations across ticks. Instead, we assume behaviors under BT scope to be idempotent, with agents responsible for reporting back statuses for the behaviors associated with BT actions and conditions.

  2. FSM's are another blog-post / chapter in their own right. I'll have to consider adding one at some point. But the concept is so entwined with the design philosophy I'm laying out here that I at least had to address it at a high level.

Appendix 1: Bonus Content -- A Deep Dive into Steering Behaviors with Vectors

I've long been a fan of Craig Reynolds. You know -- the guy who pioneered concepts revolving around "artificial life". In a famous SIGGRAPH paper Mr. Reynolds outlined numerous concepts for moving autonomous characters in interactive graphics, computer games, and cinematographic efforts. I've implemented these ideas in several contexts (including the tank-bot for this post) and feel that revisiting the concepts and the underlying vector math is worthwhile.

In implementing the tank-bot sprite I wanted a clean separation between the steering behavior, defined declaratively for integration with the Behavior Tree, and the vector math underlying the tank-bot's movement logic, in order to achieve better encapsulation, readability and re-usability. At the same time I needed to keep things clean and usable and avoid over-engineering.

Here's the sweet spot on which I landed. First the seek behavior definition.

/**
 * Defines the seek behavior for this sprite's behavior tree ... 
 */
seek () {
    const seekStates = tankConstants.seekStates;
    // GET WAY POINT:
    const wayPoint = this.wayPoints[ this.currentWayPoint ];
    switch ( this.seekState ) {
        case seekStates.PRE_FLIGHT: 
            // SET COURSE for the first waypoint
            this.setCourse( wayPoint );
            this.seekState = tankConstants.seekStates.IN_FLIGHT;
            return btConstants.IN_PROGRESS;
        case seekStates.IN_FLIGHT: 
            // stay on track: update velocity on each tick 'til you
            // reach the next waypoint...
            this.setCourse( wayPoint );
            // CHECK: ARE WE THERE YET?
            let myBox = { 
                x: this.pos.x - 10, 
                y: this.pos.y - 10 , 
                width: 20, 
                height: 20
            } ;
            let wayPointBox = { 
                x: wayPoint.x - 10,
                y: wayPoint.y - 10, 
                width: 20, 
                height: 20,
            } ;
            let colliding = collisionDetection( myBox, wayPointBox ); 
            if( colliding ) {
                // GET THE NEXT WAYPOINT
                this.currentWayPoint ++ ;
                if( this.currentWayPoint < this.wayPoints.length ) {
                    // const nextPoint = this.wayPoints[ this.currentWayPoint ] ;
                    // this.vel = setCourse( this.pos, nextPoint );
                    return btConstants.IN_PROGRESS;
                } else {
                    this.vel = Vector2D.fromCartesian( 0, 0 );
                    this.seekState = tankConstants.seekStates.ARRIVED;
                    return btConstants.SUCCESS;
                }
            }
            return btConstants.IN_PROGRESS;
        default : 
            this.seekState = tankConstants.seekStates.ARRIVED;
            return btConstants.SUCCESS;
    }
}
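
The listing above leans on two helpers that aren't shown: the seekStates enumeration hanging off tankConstants, and the collisionDetection function. Here's a plausible sketch of both, offered as assumptions rather than the framework's actual definitions: a simple state enum plus an axis-aligned bounding-box (AABB) overlap test.

// Assumed shape of the seek-state enum referenced by the seek() mini-FSM.
export const tankConstants = {
    seekStates: {
        PRE_FLIGHT: "PRE_FLIGHT",
        IN_FLIGHT:  "IN_FLIGHT",
        ARRIVED:    "ARRIVED",
    }
};

// Assumed helper: returns true if two axis-aligned boxes overlap.
export function collisionDetection(a, b) {
    return a.x < b.x + b.width  && a.x + a.width  > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}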

And the helper function.

/**
 * Helper function to update the tank sprite's course, encapsulating
 * Craig Reynolds' steering behavior.
 * 
 * @param {Point} target  a waypoint or other coords of form {x: val, y: val}
 * @param {number} speed  as a scalar
 * @param {number} turnRate to step the turn behavior... 
 */
setCourse( target, speed = 10, turnRate = 1 ) {
    const deltaVel = Vector2D.getDifferenceVector( target, this.pos );
    const courseIdeal = Vector2D.getNormalized( deltaVel).multiply( speed ) ;
    // steering = ideal new course - current velocity 
    const steering = Vector2D.getDifferenceVector(courseIdeal, this.vel); 
    steering.multiply( turnRate );
    // apply steering gradually
    this.vel.add(steering);
    // normalize to keep constant speed
    this.vel = Vector2D.getNormalized(this.vel).multiply(speed);
}

Let's zoom in on the steering behavior. With the clean separation achieved here, all the steering logic is centralized in setCourse; seek defines a "mini finite state machine" that triggers it. The secret sauce behind smoother steering behavior is the use of vectors. In a nutshell:

  1. Find the difference vector to get you from your current position to your target. I think of this as the ideal course.

  2. Adjust your current velocity (speed and direction) to get there. To do that, compute steering: take the difference vector between your ideal course and your current velocity.

  3. Finally, if you don't want to instantaneously snap to your new course (which defies real-world physics and is quite jarring), throttle your course change: multiply the steering vector by an adjustment factor (the turning rate).

Notice also that we normalize the new heading (the sprite's velocity) and multiply by the desired speed to enforce a constant speed.

The following diagram illustrates the vector based steering approach.

Diagram illustrating target position, agent position, agent velocity, and steering vectors in 2D space
Figure 3. This diagram is a graphical illustration of the target-position, agent-position, agent-velocity, and steering vectors in 2D space. The ideal course is the *difference vector* between the target and the agent. The ideal course isn't set instantaneously. Instead a steering vector is computed using *another* difference vector (that between the ideal course and the agent's *velocity*). That difference vector is scaled down and added to the current velocity to effect a *gradual course change*.

And now, for the truly intrepid, here's the math. This covers the ideal heading, steering, and the velocity update.

Step 1: Get the ideal heading as a difference vector between the target and your current position. Also, normalize that vector (since what you really need to do is just change direction -- not speed):

$$ \vec{v}_{ideal} = \vec{v}_{target} - \vec{v}_{position} $$

Step 2: Next, get steering as the difference vector between the ideal from step 1 and your current heading (i.e., velocity):

$$ \vec{v}_{steering} = \vec{v}_{ideal} - \vec{v}_{velocity} $$

Step 3: Finally, update your velocity by steering toward your ideal. Again, throttle your adjustment by some amount ($\alpha$) to avoid instantaneous "snap".

$$ \vec{v}_{\text{new velocity}} = \vec{v}_{\text{current velocity}} + \alpha \cdot \vec{v}_{steering} $$

Also, don't forget to normalize your velocity to maintain speed:

$$ \vec{v}_{\text{new velocity}} \leftarrow \frac{ \vec{v}_{\text{new velocity}} }{ \lVert \vec{v}_{\text{new velocity}} \rVert } \cdot \text{speed} $$

And there you have it: steering behavior in three easy steps!
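
To make the three steps concrete, here's one steering update worked through numerically using the Vector2D class from Appendix 2 below. The starting values (and the import path) are arbitrary choices for illustration.

import { Vector2D } from "./vector2d.js"; // illustrative path

const target   = Vector2D.fromCartesian(10, 0);  // where we want to go
const position = Vector2D.fromCartesian(0, 0);   // where we are
let   velocity = Vector2D.fromCartesian(0, 5);   // currently heading "up" at speed 5

const speed = 5, turnRate = 0.1;

// Step 1: ideal course = normalized difference vector, scaled to speed -> (5, 0)
const ideal = Vector2D.getNormalized(
    Vector2D.getDifferenceVector(target, position)).multiply(speed);

// Step 2: steering = ideal - current velocity -> (5, -5), throttled to (0.5, -0.5)
const steering = Vector2D.getDifferenceVector(ideal, velocity).multiply(turnRate);

// Step 3: nudge the velocity, then re-normalize to hold the speed constant
velocity.add(steering);                                      // (0.5, 4.5)
velocity = Vector2D.getNormalized(velocity).multiply(speed); // roughly (0.55, 4.97)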

Appendix 2: A Vector Implementation

As yet another bonus, here's a vanilla JavaScript Vector implementation (which I wrote a loooong time ago). It's served me well over the years. It's worth giving it a look-see -- if for no other reason than to refresh your vector math. But it can also be used as-is as a drop-in for your vector arithmetic needs.

Pro Tip

Vectors are at the heart of thinking about moving intelligent agents all over the place!

export class Vector2D {

    /**
     * Construct a new vector using either polar or cartesian static initializers ...
     * @param {*} x 
     * @param {*} y 
     */
    constructor( x, y ) {
        this.x = x; 
        this.y = y;

        // ---- BIND METHODS --------
        this.setPolarCoords = this.setPolarCoords.bind(this);
        this.getPolarCoords = this.getPolarCoords.bind(this);
        this.setCartesian   = this.setCartesian.bind(this);
        this.getCartesian   = this.getCartesian.bind(this);
        this.add            = this.add.bind(this);
        this.getDistance    = this.getDistance.bind(this);
        this.multiply       = this.multiply.bind(this);
    }

    /**
     * Factory method to get a Vector2D given speed and direction
     * as defined below:
     * 
     * @param {number} r     speed (magnitude)
     * @param {number} theta direction in RADIANS
     * @returns {Vector2D} a new Vector2D with cartesian components [x, y]
     */
    static fromPolar(r, theta) {
        const x = r * Math.cos(theta);
        const y = r * Math.sin(theta);
        return new Vector2D(x, y);
    }

    static fromCartesian(x, y) {
        return new Vector2D(x, y);
    }

    /**
     * Static utility to convert degrees to radians...
     * @param { float } degrees 
     * @returns radians
     */
    static degreesToRadians(degrees) {
        return degrees * (Math.PI / 180);
    }

    /**
     * Static utility to convert radians to degrees...
     * @param { float } radians 
     * @returns degrees
     */
    static radiansToDegrees(radians) {
        return radians * (180 / Math.PI);
    }

    static getDifferenceVector( v1, v2 ) {
        const x = v1.x - v2.x;
        const y = v1.y - v2.y;
        const vDiff = Vector2D.fromCartesian(x, y);
        return vDiff;
    }

    /**
     * Given a vector, v, returns a new normalized vector 
     * (i.e., a vector of unit length with orientation of
     * input vector). 
     * 
     * The special case of the zero vector input returns 
     * a zero vector right back since 0 vector has no 
     * orientation. Client code should accommodate the 
     * special case as necessary.
     * 
     * @param {*} v 
     * @returns 
     */
    static getNormalized( v ) {
        const len = v.length();
        if( len === 0 ) {
            return Vector2D.fromCartesian(0, 0);
        }
        const x = v.x / len;
        const y = v.y / len;
        return Vector2D.fromCartesian(x, y);
    }

    /**
     * Update the vector in place using polar coordinates.
     * 
     * @param {number} r - The new magnitude of the vector.
     * @param {number} theta - The new angle (in radians) of the vector.
     */
    setPolarCoords(r, theta) {
        this.x = r * Math.cos(theta);
        this.y = r * Math.sin(theta);
    }


    /**
     * Get the vector's state in polar coordinates.
     * 
     * @returns {Object} A plain object with `r` (magnitude) and `theta` (angle in radians).
     */
    getPolarCoords() {
        const r     = this.length();
        const theta = Math.atan2(this.y, this.x); 
        return { r, theta };
    }


    /**
     * Set this vector's, cartesian coordinates
     * @param {number} x x coord
     * @param {number} y y coord
     */
    setCartesian(x, y) {
        this.x = x;
        this.y = y;
    }

    /**
     * Read the cartesian coordinates
     * @returns {Object} { x:number, y:number }
     */
    getCartesian() {
        return ({
            x: this.x,
            y: this.y
        });
    }


    /**
     * Add a given vector to *this* vector
     * 
     * @param {Vector2D} v2D 
     */
    add( v2D ) {
        this.x += v2D.x;
        this.y += v2D.y;
        return this;
    }

    multiply( scalar ) {
        this.x *= scalar;
        this.y *= scalar;
        return this;
    }


    getDistance( location ) {
        // expect a Vector2D ...
        const {x, y} = location; 
        const d = Math.sqrt( (x-this.x)**2 + (y-this.y)**2 );
        return d;
    }

    /**
     * get the length of the vector
     */
    length() {
        const l = Math.sqrt( this.x*this.x + this.y*this.y ) ;
        return l;
    }

    /**
     * Get a string representation of the vector
     * 
     * @returns a string representation of the vector
     */
    toString() {
        return "[ " + this.x + ", " + this.y + " ]";
    } 

    getClone() {
        return Vector2D.fromCartesian(
            this.x,  this.y
        );
    }

    /**
     * Get the vector orientation using atan2 to avoid
     * quadrant confusion. 
     * 
     * @returns a scalar value in degrees, from -180 to 180
     */
    getOrientation() {
        const rad = Math.atan2( this.y, this.x );
        return 180/Math.PI * rad;
    }

    /**
     * Rotate this vector around the origin by a given angle (degrees).
     * COUNTER clockwise, in-place.
     * @param {number} degrees - Rotation angle in degrees (normalized internally to [0,360)).
     */
    rotate(degrees) {
        // Normalize angle to [0, 360)
        const angle = (degrees % 360 + 360) % 360;
        const radians = angle * Math.PI / 180;

        const cos = Math.cos(radians);
        const sin = Math.sin(radians);

        // COUNTER clockwise rotation: flip sign on sin for clockwise...
        const newX = this.x * cos - this.y * sin;
        const newY = this.x * sin + this.y * cos;

        this.x = newX;
        this.y = newY;

        // allow chaining
        return this; 
    }

}
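
And a quick, hypothetical usage snippet to exercise the API (and refresh the polar/cartesian round trip):

// Build a velocity from polar coordinates: speed 10, heading 45 degrees.
const vel = Vector2D.fromPolar(10, Vector2D.degreesToRadians(45));

console.log(vel.toString());       // roughly [ 7.07, 7.07 ]
console.log(vel.length());         // 10 (within floating-point error)
console.log(vel.getOrientation()); // 45 (degrees)

// Rotate 90 degrees counter-clockwise and confirm the new heading.
vel.rotate(90);
console.log(vel.getOrientation()); // 135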

Resources

  1. Introduction to behavior trees

  2. Orchestrating LLM Agents with Behavior Trees: A Practical Guide

  3. Steering Behaviors For Autonomous Characters.