Impulsing: #devtober Day 9

Quick update today…

I got the Soojin materials labelled in code (I'm not sure why they're always somewhat scrambled). I also have the six values from the lower-level node being reflected in the shader materials for the UI-level component. There are six more materials to populate, but that requires exposing them from the lower-level node. I think the values currently being used aren't quite right, either. (They have calculated values, but those values need to be split. That shouldn't be hard to do.) I'll have to work that out tomorrow, but it should be fun to get it all working.

Day 9: TGIF.

Impulsing: #devtober Day 8

I didn't achieve much today, but I did manage to get a couple of small things done.

  1. I exported the new Soojin Blender model as .fbx and then dropped it into Godot. Nothing too special there.
  2. Got the Soojin object to respond to a mouse drag so it can be moved around. This ability is part of what is driving the model redesign – the new functionality makes it similar to another existing component, so I'm changing the look to match (same family, so to speak). Perhaps down the road, even more behavior – or even code – will unify.
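The drag handling itself is conceptually simple: remember where on the object the pointer grabbed it, then move the object with the pointer while preserving that offset so it doesn't jump. A minimal sketch in Python (the real code is GDScript inside a Godot input handler; all names here are hypothetical):

```python
# Minimal sketch of drag-to-move logic (hypothetical names; the real
# implementation is GDScript responding to Godot input events).

class Draggable:
    def __init__(self, position):
        self.position = position          # (x, z) position on the board plane
        self.dragging = False
        self.grab_offset = (0.0, 0.0)

    def on_press(self, mouse_world):
        # Remember where on the object we grabbed it, so it doesn't jump.
        self.dragging = True
        self.grab_offset = (self.position[0] - mouse_world[0],
                            self.position[1] - mouse_world[1])

    def on_motion(self, mouse_world):
        if self.dragging:
            self.position = (mouse_world[0] + self.grab_offset[0],
                             mouse_world[1] + self.grab_offset[1])

    def on_release(self):
        self.dragging = False
```

In the 2.5D setup, the mouse position would first be projected from the screen onto the plane the components live in; the logic above then applies unchanged.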

That's it. Not a lot, but taking a small step each day means you're at least moving forward. It might not be speedy progress, but it's more progress than none, and a little each day can add up to quite a bit.

They say that seven eight nine. I guess we'll find out tomorrow.

Impulsing: #devtober Day 7

I have enjoyed working with shaders in Godot. However, I have found I don't know all the ins and outs of them. That is to be expected when you're learning something, but it can be frustrating when things start to go wrong in ways where it's hard to know how even to begin to attack them.

I have been living with an odd shader behavior where one component starts to behave erratically when there are a number of pulses on screen. I happened to be reading a bit about the Godot renderer today, and it mentioned that shaders that use alpha are processed differently. This particular component had two shaders with alpha (a nested situation). I finally had an idea to try to solve the problem.

I ended up getting rid of alpha for one shader and repositioning things a bit. That either solved the problem or avoided it such that I don’t see it anymore (“solved”). Either way, I’m happy for now.

But I have a feeling there will be more like this eventually as I explore the boundaries of both my creativity and the engine.

Beyond that, I tweaked the Soojin model some more, after a walk where I contemplated the design a bit more. It may not be perfect this way, but it will be a good start. Then I can see how well it works. Refine or redesign from there.

Day 7 – almost a quarter of the month done. It has been fun so far.

Impulsing: #devtober Day 6

In the end, I’m glad I didn’t get much done yesterday. I’m not sure why I felt like crap, but taking another day gave me a better idea of how to proceed. Tonight, I got some of that going, with a new Blender model.

The colors are just so I can know where the materials are defined, though I think it’s a cool look. I’m not totally insane, though.

I integrated it into the game, except for the new material work. That will come tomorrow. I think it’s going to work nicely, in the end.

Six days down. I didn’t create the Universe, and I’m not planning to rest any time soon, but it has been some nice progress. And sometimes you have to make a brief stop to go even further.

Impulsing: #devtober Day 4

Not so rainy today, but I still got a bit done. I TDD’d the new “Soojin” node. It ended up being a bit more complicated than I thought, in terms of various cases I had originally not considered, but which became clear once I started pulling it together.

I ended up in an interesting place.

Getting the component to work and seeing it in action, I began to realize what it actually was as opposed to what I originally intended it to be. I find this to be a more common occurrence than I would have imagined. And it's part of the fun in this creative endeavor. I used to have the same experience when writing – sometimes the characters and plot would evolve as I wrote, in ways I hadn't anticipated. It's a joy when that happens.

I had a very specific use for this node when I sat down to create it. In the 2D game there was this component containing two nodes:

The purpose of the component was to act as a sort of "target". Once triggered, it would remain going, enabling further sections of the graph. As I moved to 3D, I have been trying to revisit these sorts of two-node components to see if I can simplify them. Some components will need to be directional; those will come later. I have been able to reimagine these components as single-node components. What that means is figuring out what is fundamental about them and then creating one or more nodes that either implement that functionality or that can be combined to do so.

What I have found is that when recreating these as single nodes, it often happens that I can implement behavior that suffices for what they used to be, but the simplification often ends up bringing with it new uses and behaviors that (ironically) the bulkier component didn't have, since the original component was more targeted and specific. By extracting the essence, the node ends up with more and varied behavior. I don't know if that makes sense. Perhaps someday I'll be able to be more specific.

At one point today, I took a walk just to get away from the computer, and I began imagining different setups with this new component, and it suddenly dawned on me what it actually was. It wasn't what I had originally thought it was (or what I had named it – "Soojin" was more a "Mary". Don't think about that too much. I doubt you'll be able to attribute any sense to it. It's more arbitrary than you might think). Apart from it being quite exciting to find new uses (and new puzzle potentials), it also means I'm going to have to redesign its look. It's important that the components provide good feedback about their state, so players know what's going on. The current component design doesn't show enough. There is more going on under the surface that needs to be surfaced.

Stay tuned for a new look. And for another #devtober day.

Day four, not looking forward to Monday tomorrow. All that time spent working when I could be doing this.

Impulsing: #devtober Day 3

Today was a rainy Saturday – which, once I had some caffeine to overcome the gloom, was a perfect opportunity to spend a number of hours working on Impulsing.

I started out in the morning looking back at the 2D version, as I had completed a number of levels there, some with components I don’t have yet in 3D. I decided that in order to get those levels up and running in 3D, I need only a couple more major components. So I decided to work on one of those today.

I came up with a simple design for it, visually, and got it up and worked out in Blender. It changed a bit as I worked on it. Sometimes you gain inspiration as you go along. Also, what looks good in Blender – where you can pan, zoom and rotate – sometimes doesn't end up looking good in Godot once you look at it in 2.5D at a fixed angle. You lose some of the visual cues that tell you what it is. Here is the new component, "Soojin". (Code names abound on this blog.)

I got far enough to get a new component created and integrated into the palette, along with a bare-bones underlying node that doesn't do what I want yet but gives me a place to "make it happen".

Tomorrow, it will be “TDD time” to get the functionality in place. I thought it might be straightforward enough that I was tempted to just implement it. But then I realized that the behavior was more complex than I first believed. I need to think about what I really want it to do before I try to code it.

Despite having been a software developer for a long time, I have found game programming to be different in very striking ways. You would think it’s all just programming, but it becomes so much more. It might be because I’m actually designing the game as well, but I think it’s also more than that.

If you’re writing, for example, a paint program, as long as people can get done what they want to do in the software, it doesn’t really matter how you do it. You can even have a completely ugly but functional application. The important thing is the ability for people to use it as a tool.

When writing a game, though, you're actually creating an experience. A bare-bones, ugly, functional game would work, but it wouldn't do well. That's not what people want from games generally. You need to consider not only what people will do but also how they do it, what they see, what they hear, how they know what is going on and – most critically – does everything behave in a reasonable way?

My understanding of the elements of my own game has evolved over time and continues to do so. The more you dive into the details, the more you see. And what you have created – if you care about what the player experience is – constrains what you can create, because as you work out the rules and behaviors, the next set of rules and behaviors has to fit in with what you have created so far. You can’t just hack things on. You have to keep refining what you’re doing to avoid ending up with an unholy mess.

Sometimes, the design experience is revelatory. I didn't even know all of what I have created in the game so far when I started this. The game itself presented issues that became design decisions as I solved them. Sometimes that was the most exciting part, when you come up with that answer to a problem and it's not just a hack but something that emerged from what you've been working on in a natural way. Part of the work of design is seeing what the game is telling you and forcing you to face.

What’s interesting about that (and possibly sad) is that when the design decisions are so natural, nobody but you may ever realize what it took to arrive at them.

Day 3, over and out.

Impulsing: #devtober Day 2

No fancy graphics today.

I spent the evening getting the “Owen” node hooked up to the higher-level “Owen” component. I decided to have this split from the beginning between the underlying graph nodes and the higher-level UI/game components. It has served me well so far, especially since a serialized Godot component is a huge mess. It’s amazing how much “stuff” it writes out.
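As a sketch of what that split looks like (in Python rather than GDScript, with hypothetical names): the underlying graph node owns pure state and logic, while the UI component just wraps it and handles presentation, so only the thin node layer needs serializing:

```python
# Sketch of the node/component split (hypothetical names; the real code
# is GDScript). The graph node is plain state and logic; the UI component
# wraps it and turns state changes into visuals.

class OwenNode:
    """Pure graph-level state and behavior; no UI concerns."""
    def __init__(self):
        self.value = 0
        self.connections = []

    def pulse(self, amount):
        self.value += amount
        return self.value

class OwenComponent:
    """UI/game-level wrapper; delegates all logic to the underlying node."""
    def __init__(self, node):
        self.node = node
        self.displayed_value = node.value

    def on_pulse_received(self, amount):
        new_value = self.node.pulse(amount)
        self.update_visuals(new_value)

    def update_visuals(self, value):
        # In the real game this would drive shader/material parameters.
        self.displayed_value = value
```

The payoff is that saving a level only has to persist the small `OwenNode` side, not the sprawl of a serialized Godot scene.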

Once I got the node hooked up and loading from the palette, I found some issues I had to fix, but it all went smoothly.

I have a Trello board with next steps. I need to add some more, as I know there are more than three things left to do. One will be to create some new levels with this component and make sure it serializes properly. I haven't tried that yet. At least I have a plan for tomorrow. (Saturday! Yay!)

Hey, look! There was a graphic after all. That’s the editor component palette so far.

Impulsing: #devtober Day 1

I decided to give the #devtober game jam on itch.io a try this year. It popped into my inbox, and it sounded like it might be a good experience. You can read more about it here: https://itch.io/jam/devtober-2020. Basically, you just work on your game each day of October and blog about it. Then at the end you write a post-mortem. I have been working on it almost every day anyway, but this makes me write about it, too, and maybe some people will read along.

This is day 1.

A month to go.

Given that this is the first real Impulsing posting, I probably should give a little information about it. I would love to post images, talk all about it, and get everyone excited (or bored) by what I’m doing. However, part of the game is figuring out what it all means, and talking about it too much could spoil that sense of discovery. So I need to be a bit guarded.

Impulsing is a puzzle game I have been creating off and on for over two years. Probably closer to three now. Part of that length of time is that I've been working on it only in my spare time. Another part is that I have been working on another project at the same time that has been taking priority. And another part is that I actually gave up on it for a while before a mental "eureka" moment happened, which provided a breakthrough to a tough design problem and got me back into it with real energy.

It went through a number of (crude, horrible) prototypes in HTML and JavaScript before they began causing my CPU usage to go through the roof. Time to get more real.

I dabbled in Unity for a while before my struggles to do simple things put me off it. I am now using Godot, and I'm much happier with it, though it too has quirks that drive me bonkers at times. (One example: I'm using resources to read and write levels, and there have been times when a game crash has trashed the current level resource file. The Godot editor/system seems to keep it open – if I restore the file from git, it just gets overwritten again whenever I do anything. I have to shut down the Godot editor before I can restore the file.) However, there is a lot I really like about Godot, like its nested scene/component model.

The game started off life in 2D, and I sent around a small prototype to some people I know. For various reasons, I decided to move to 3D.

One reason I did so is that I'm "better" at making 3D art than 2D, even though I was using Inkscape to create SVGs. I enjoy using Blender, I can produce some nice (if simple) assets, and I want to get better at it. Also, the 2D game was top-down, and it was just too limiting to try to make things look good and interesting looking straight down. Now with 3D (really 2.5D), I can have more depth and make things easier to understand. And there are lots of interesting 3D assets to use, if desired.

Original 2D game:

Current 3D look:

Another reason I switched to 3D is just to experience using 3D. I don’t know if this game will ever amount to anything, so I’m really using it just to learn about a wide variety of things.

That’s a brief overview. That’s all I’ll say for now. What did I do today?

I’m working on a new component called “Owen”. All right, that’s not what it’s really called. That’s just its code name. I just made it up.

This is what it looks like, so far. It feels too bulky, but… get it working first; tweak later.

I have made a point of using TDD to drive at least the complex parts of the game, like the graph and nodes. I'm using GUT in Godot for my unit tests. Today was working test by test to drive the component's code. As often happens with TDD, suddenly it was just working. And I had the tests as a testbed to show me it working, outside of the complexities of the game. Faster turnaround. Faster development. A bit more code in the end, but probably less typing. And I know, at least for the test cases I have so far, it works.
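The actual tests are GDScript run under GUT, but the shape of the workflow translates directly. Here is an analogous sketch using Python's unittest, against a hypothetical stand-in node:

```python
# Analogous sketch in Python's unittest; the real tests use GUT in
# GDScript, and OwenNode here is a stand-in for the real graph node.
import unittest

class OwenNode:
    """Stand-in for the node being test-driven."""
    def __init__(self):
        self.value = 0

    def pulse(self, amount):
        self.value += amount

class TestOwenNode(unittest.TestCase):
    def test_starts_at_zero(self):
        self.assertEqual(OwenNode().value, 0)

    def test_pulses_accumulate(self):
        node = OwenNode()
        node.pulse(2)
        node.pulse(3)
        self.assertEqual(node.value, 5)
```

Run with `python -m unittest`: each new behavior gets a failing test first, then just enough code to make it pass, which is exactly the test-by-test rhythm described above.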

Time to step back from the screen before heading off to bed.

NPC Goals

When I first started using Quest, I had an idea for a game that I called “What Will Be”. It was loosely based on a story I had started writing but never finished, involving a group of people brought together by government forces for (initially) unknown purposes. It was going to be a parser game, and I wanted it to have multiple autonomous NPCs. It was during my attempt to create the infrastructure for this game that I first came up with the idea of “goals”.

A “goal” is conceptually similar to its real life counterpart, though expressed in terms of the game world: a goal is, roughly speaking, a desired world state. Perhaps it’s an NPC wanting to be somewhere. Perhaps it’s a door being opened or an object given. The idea would have to be extended to more internal things as well (e.g. speaking to another character with the goal of conveying information), but I figured I’d get to that once I got the more mundane situations out of the way. Trying to bite off too much at once can lead to either indecision or madness.

I chose some initial goal situations to implement. They were these:

  1. An elevator
  2. NPCs getting from point A to point B in the game world (a three-story building, in this case), including riding the elevator and using key cards to enter rooms.
  3. An initial scene where an NPC leads the PC to a meeting.

With respect to number 1, I seem to have this thing for elevators. Perhaps it’s because they have straightforward, well-defined behavior but with multiple parts (e.g. the car itself, buttons, doors, lights). And NPCs moving around and pursuing agendas was something I really wanted as well.

My first stab at code for goals had a form which I realize now was incorrect. I’ll briefly describe it and then get into where that led me, which is to where I am today.

A goal had three main pieces:

  1. a “try” action,
  2. an “achieve” action, and
  3. code to work out whether either of those was possible (Can the goal be achieved? Can the world be changed – can I try – in order to create the conditions where the goal can be achieved?)

If the necessary conditions for a goal existed, then the goal could be achieved. A goal had behavior that ran when it was achieved. It might be an NPC transitioning to a new room. It might be some other change in world state.

If the world conditions were not such that the goal could be achieved, then there was code to try to get the world into that state. And the “try” section had conditions as well.

Let’s give an example.

An NPC wishing to enter the elevator would acquire an “enter elevator” goal. The conditions for entering the elevator were that the NPC had to be in the elevator foyer, and the elevator doors had to be open. In that case, with those conditions satisfied, the “achieve” action moved the NPC into the elevator car.

If the doors were not open (but the NPC was in the elevator foyer), the NPC had an action to try to change the world to achieve the goal: pushing the elevator button, if it wasn’t already lit up.

So we have this:

  • achieve condition: NPC in foyer and doors open
  • achieve behavior: NPC moves into elevator
  • try condition: NPC in foyer and elevator button not pressed yet
  • try behavior: press elevator button

If the NPC was in the foyer and the button was already pressed, the NPC had nothing to do. It effectively “waited”. Once the elevator showed up and the doors opened, the NPC could achieve its goal by entering the elevator.

The elevator itself had two goals: “close door” and “arrive at floor”. The close door goal’s achieve behavior was to close the elevator doors. The one for the “arrive at floor” goal was to open them. So they were mutually exclusive goals, with mutually exclusive conditions. The “try” action for “close door” was to count down a timer set when the doors had opened. When it reached zero, the doors could be closed. The “try” behavior for the “arrive at floor” goal was to move the elevator to a floor that has been requested by an NPC or PC.

If the elevator doors were closed and no buttons were pressed (either inside or outside the elevator), it did nothing.
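Reconstructed in Python (the original was Quest script; the state layout and names here are hypothetical), the four-part structure above looks roughly like this, shown driving the NPC's "enter elevator" goal:

```python
# Sketch of the try/achieve goal structure (a hypothetical reconstruction;
# the original implementation was in Quest).

class Goal:
    def __init__(self, achieve_cond, achieve, try_cond, try_behavior):
        self.achieve_cond = achieve_cond    # can the goal be achieved now?
        self.achieve = achieve              # world change on achievement
        self.try_cond = try_cond            # can we try to set things up?
        self.try_behavior = try_behavior    # world change when trying

    def step(self, world):
        if self.achieve_cond(world):
            self.achieve(world)
            return "achieved"
        if self.try_cond(world):
            self.try_behavior(world)
            return "tried"
        return "waiting"

world = {"npc_room": "foyer", "doors_open": False, "button_lit": False}

enter_elevator = Goal(
    achieve_cond=lambda w: w["npc_room"] == "foyer" and w["doors_open"],
    achieve=lambda w: w.update(npc_room="elevator"),
    try_cond=lambda w: w["npc_room"] == "foyer" and not w["button_lit"],
    try_behavior=lambda w: w.update(button_lit=True),
)

assert enter_elevator.step(world) == "tried"     # presses the button
assert enter_elevator.step(world) == "waiting"   # button lit; nothing to do
world["doors_open"] = True                       # elevator arrives, doors open
assert enter_elevator.step(world) == "achieved"  # NPC steps into the car
assert world["npc_room"] == "elevator"
```

The "waiting" branch is the NPC standing in the foyer with the button already lit; the elevator's own "arrive at floor" and "close door" goals would be two more `Goal` instances stepping against the same world state.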

The initial "lead player" sequence was a complex mix of path following (both to the player and to the target room) as well as some canned dialogue meant to coax the player to follow. There was also a "hold meeting" goal sequence, which was really canned and really unsatisfying to me.

What I found most unworkable about this method of doing goals was the need to manually string them together. For example, any path following (move from A to B) was explicitly programmed. There was nothing in the NPC that decided on a room or worked out how to get there. Plus, I wanted it to be possible to “interrupt” an NPC’s goal chasing. They might be heading to their room, but if you started talking to them, I wanted that goal to be put on hold (if it wasn’t too pressing) to take part in the conversation, with moving toward their room to resume once the conversation was over – unless some other more pressing goal had come up. The key here is that each step along the way in path following needed to be its own goal, to be evaluated and next steps considered at each turn.

To the extent that it worked, it worked nicely. But something wasn’t right with it.

Fast forward to my work with ResponsIF, and I found myself once again trying to implement an elevator. For one thing, I had already done it in Quest, so it was a sort of known quantity. The other was that if I couldn't implement that, then I probably couldn't implement much of anything I wanted to do.

Right away, I ran into the same problem I had had before with the Quest “goal” code: I was having to program every little detail and hook everything together. There was no way to connect goals.

After much thought, I had a sort of epiphany. Not only did I realize what needed to be done, I also realized why that original goal code seemed awkward.

First the original code’s flaw: the “try” and “achieve” sections were actually two separate goals! For example, the “enter elevator” goal included not only that goal but the goal that immediately preceded it. In order to enter the elevator (the desired state being the NPC in the elevator), the doors had to be open. But the doors being open is also a world state! And the “try” code was attempting to set that state. Strictly speaking, they should be two separate goals, chained together. I had unconsciously realized their connection, but I had implemented it in the wrong way. And that left me unable to chain anything else together, except in a manual way.

In this case, we have a goal (be inside the elevator) with two world state requirements: the NPC needs to be in the foyer, and the door needs to be open. Each of these is a goal (world state condition) in its own right, with its own requirements. In order for the NPC to be in the foyer, it must move there. In order for the doors to be open, the button must be pressed. I’ll break this down a bit in a followup post, to keep this one from getting too large.

So what needs to be done?

What needs to be done is to connect the “needs” of a goal (or, more specifically, the action that satisfies a goal) with the outputs of other actions. We need to know what world state an action changes. And there is where we run into a problem.

“Needs” in ResponsIF are just expressions that are evaluated against the world state. The game designer writes them in a way that reads naturally (e.g. ‘.needs state=”open”’), but they are strictly functional. They are parsed with the intent of evaluating them. There is no higher level view of them in a semantic sense.

In order to have a true goal-solving system, we need to know 1) what world state will satisfy goals, and 2) what world state other goal actions cause. The goal processing methodology then is, roughly, to find other goals that satisfy the goal in question. Then we iterate or recurse: what conditions do those goals need? Hopefully, by working back far enough, we can find actions that satisfy the subgoals and that can actually be carried out.

It’s a bit more complex than that, but the first coding addition needed is clear: we have to be able to hook up the effects of actions with the needs of other actions in a way that the code can do meaningful comparisons and searches and make connections. We need to be able to chain them together. Once we have a way to do that, then the code can do itself what I had been doing by hand before – creating sequences of goals and actions to solve problems and bring to a life a game’s overall design.