Finite State Machines (FSMs), the current standard technique for behavior modeling in human behavior simulation, represent agents' mutual interactions without error. However, they are limited in generating emergent, varied behavioral responses. As a new approach to a more responsive behavior model, this project developed a reinforcement learning (RL) agent: an artificial agent that learns appropriate behavior by itself in a virtually constructed environment. This behavior model computes physics-based motor skills and generates unstructured behaviors with precise parameterization of human and environmental factors, so the agent can respond to subtle variations in the given environmental conditions. The project applies the RL-powered behavior model to children's play behaviors, assessing children's physical and social development and fatal-injury risk as measures of learning-environment design performance. I applied the developed behavior model in an agent-based simulation. Through experiments on an empirical learning-environment design, the project confirms that the agent model successfully adapts to the given environmental settings and generates more localized and subtle behavioral responses.
Pre-made animation data-based behavior transition (FSM)
RL agent's behavior transition
V(a) ← V(a) + α × (r − V(a))
The equation V(a) ← V(a) + α × (r − V(a)) shows how the agent learns from its environment: the estimated value V(a) of an action a is nudged toward the received reward r at a learning rate α. Compared with pre-made animation data, this lets an RL agent choose actions flexibly and adaptively, responding to variations in the environment.
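As a minimal sketch of the update rule (the reward value and iteration count below are hypothetical, not from the project), repeated updates pull the value estimate toward the observed reward:

```python
# Incremental value update: V(a) <- V(a) + alpha * (r - V(a))
# Hypothetical illustration; the reward and alpha values are made up.

def update_value(v, reward, alpha):
    """One step of the incremental value-update rule."""
    return v + alpha * (reward - v)

# With a constant reward, repeated updates converge toward that reward.
v = 0.0
for _ in range(100):
    v = update_value(v, reward=1.0, alpha=0.1)

print(round(v, 3))  # 1.0
```

Each step closes a fraction α of the gap between the current estimate and the latest reward, which is why a larger α adapts faster but tracks noise more.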
Step 1: Converting the character to an agent with 16 segmented joints, so that each physical parameter can be controlled for various play types.
Step 2: Conducting test iterations exploring ergonomic and cognitive responses to implement physical and social play behaviors.
Step 3: Training!
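The training loop in Step 3 can be sketched abstractly with the value-update rule above (a simplified tabular stand-in: the actual training operates on the 16-joint physics agent, and the action names and reward probabilities here are hypothetical):

```python
import random

random.seed(0)

# Hypothetical action set and success probabilities; a simplified
# tabular stand-in for the project's physics-based training.
ACTIONS = ["climb", "slide", "kick"]
REWARD_PROB = {"climb": 0.2, "slide": 0.5, "kick": 0.8}

values = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # Epsilon-greedy: mostly exploit the best-valued action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    # Simulated environment feedback.
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # Incremental update: V(a) <- V(a) + alpha * (r - V(a))
    values[action] += alpha * (reward - values[action])

best = max(values, key=values.get)
print(best)
```

Over many iterations the agent's value estimates separate the actions by expected reward, which is the mechanism that lets the trained agent prefer behaviors the environment actually rewards.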
Algorithm 1: RewardsFunctionForTraining

function OnActionReceived(actionBuffer):
    a = actionBuffer.ContinuousActions
    i = 0
    for each bodyPart in bodyParts:
        bodyPart.JointRotation(a[i], a[i+1], a[i+2])
        i = i + 3

float CalculateRewardsForPhysicalPlay():
    if bodyParts[foot].touchingSlopes:
        reward = 1
    else:
        reward = -1
    return reward

float CalculateRewardsForSocialPlay():
    reward = 0
    if parameters.TeamId == Team.AgentA:
        if ball.CollidesWith(bodyParts[foot]):
            reward = 0.2
        if ball.CompareTag("Goal"):
            reward = 1
    return reward
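For reference, the physical-play reward in Algorithm 1 can be sketched in Python (the `BodyPart` class and `touching_slope` flag are hypothetical stand-ins for the simulation's physics queries):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the simulation's per-joint physics state.
@dataclass
class BodyPart:
    name: str
    touching_slope: bool = False

def physical_play_reward(foot: BodyPart) -> float:
    """Mirrors CalculateRewardsForPhysicalPlay in Algorithm 1:
    +1 while the foot keeps contact with the slope, -1 otherwise."""
    return 1.0 if foot.touching_slope else -1.0

print(physical_play_reward(BodyPart("foot", touching_slope=True)))   # 1.0
print(physical_play_reward(BodyPart("foot", touching_slope=False)))  # -1.0
```

The sparse ±1 signal rewards maintaining contact during climbing-type play; the social-play reward instead shapes behavior with a small reward for ball contact and a large one for scoring.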
Last step: Implementation test - integrating the trained behaviors into the agent-based simulation, with analytics for measuring learning support (for physical and social play) and safety.
Epilogue: the very first stage of deriving children's play behaviors was making pre-made animation clips... please watch it just for fun!
© 2023 Jin Lee. All rights reserved.