Neural Network Development

Welcome! As the title suggests, this is about creating a neural network: an adapting AI!

Now, I’ve already done most of the research and I think this is possible; I just lack the motivation to work on it. So I’ll post my workings here for you guys to work together and figure this out. I can answer any questions you have, but for anything general about AI itself, Google it or find a video. Kids these days don’t know what a search bar is. So here’s an explanation of one part of it, the forward pass.

Taken from the guide I wanted to make on the subject:

A neural network is a machine version of our human brain. Like our brains, they contain neurons: parts of the code that use weights and biases to put importance on certain information. These neurons are organized into layers, called the input layer, hidden layer, and output layer.

Uses
Before I even get into what happens here, what even is the use for neural networks? Like I said, a neural network is a machine version of our brain. This means that we can give the machine values from the game to help it make real-time decisions. For example, in a kingdom game where you look after resources such as happiness, money, food, and construction, the AI can decide which action is most important whenever it needs to act. If we can make advanced AI, it can lead to player-vs-bot games beyond simple sentries. Besides this, AI could learn our personality and use that to style the game to our liking. However, that form of learning belongs to unsupervised learning, whereas we are doing supervised learning.

Beginner Level

  • Input Layer
This is the start of the system: it is where you input information about certain aspects. For example, if you were making an AI to tell dogs and cats apart, you might give it ear and tail lengths to examine.

  • Hidden Layer
This is where most of the magic happens. It’s where the math is implemented to calculate a value to be passed on to the next layer. This is where the AI would grab those tail and ear lengths and make something of them. It would use those input values, along with each neuron’s own weights and bias, to make another number.

  • Output Layer
Just like its name suggests, this is where the output goes. After all the math is done, the AI might output a number between 0 and 1. It then compares its two values (one output was computed under the assumption it’s a dog, the other under the assumption it’s a cat). It would then see which number is larger and make its prediction. [1]

Mediocre Level

So what math exactly happens here?

No math happens in the input layer, so the calculations only begin in the hidden layer. What happens is that each input is multiplied by its own weight, and a bias is added to the total.

weighted_sum = (input_1 * weight_1) + (input_2 * weight_2) + (input_3 * weight_3) + bias

Then after this, we pass it through a sigmoid function, which basically condenses the number into a decimal between 0 and 1.

Sigmoid can be thought of as 1 / (1 + e^-x), where x is the weighted_sum I wrote in the equation above. This equation doesn’t change; each neuron goes through the same calculation. The only difference is that for neurons in later layers, their inputs are the sigmoid outputs from earlier neurons. An important thing to note is that each neuron starts with randomized weights and a bias that can be between -1 and 1. The reason for this is to allow for learning within the AI, like specialization for more complex actions, to help it decide better on its course of action.
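
To make that concrete, here’s a minimal sketch of the forward pass in regular Python (not Gimkit blocks; the numbers are made up just for illustration):

import math

def sigmoid(x):
    # condenses any number into a decimal between 0 and 1
    return 1 / (1 + math.exp(-x))

def neuron_forward(inputs, weights, bias):
    # weighted_sum = (input_1 * weight_1) + (input_2 * weight_2) + ... + bias
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# one hidden neuron looking at two inputs (e.g. tail and ear lengths)
print(neuron_forward([0.1, 0.4], [0.2, 0.5], bias=0.3))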

Now, this is actually it for the forward calculation. The real math happens in backpropagation, when we have to find our error and update the weights and biases to make the AI smarter in its decision making. For now, we can begin with the first part.

If you read all that, congratulations: you now know some of the basics of how this works. Anyway, the reason I’m making this topic in the first place is that I had to revise my system at least twice with dramatic changes. So what I’m about to tell you is by far the most memory-efficient option, at least theoretically.


The Idea

Basically, we want to do the most intense amount of concatenation known to Gimkit kind. The idea is to make what I’d like to call “Hubs” to store all the information for us to withdraw. If you read my explanation above, you know what the “inputs” and “weights” are. I was thinking of making properties that hold a certain number of weights per input, separated by certain characters, like &[insert weights here]& or something like that, so the AI can withdraw them for its calculations. If that sounds confusing, good; that’s why I decided to post this. This isn’t for everyone, so if you don’t think you can offer anything of use, don’t reply, and only post something meaningful. Anyways, as one who is bored of this place, I won’t be the one to actively work on code unless summoned [2].
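
To picture what a Hub could look like, here’s a rough Python sketch of packing weights into one string and withdrawing them again (the & format is just the placeholder from above, not a final decision):

# two hidden neurons, two weights each
weights_per_neuron = [[0.2, 0.5], [0.4, 0.9]]

# pack everything into one "Hub" string: "&0.2,0.5&0.4,0.9&"
hub = "&" + "&".join(",".join(str(w) for w in ws) for ws in weights_per_neuron) + "&"

# withdraw the 2nd neuron's weights by splitting the Hub back apart
chunks = hub.strip("&").split("&")
neuron2_weights = [float(w) for w in chunks[1].split(",")]
print(neuron2_weights)  # [0.4, 0.9]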

That’s most of it. Anyone with a question about backpropagation, just Google it (though if you’re curious, I can explain it at a later time).

Uh, good luck.

By the way, I had some notes written, but they’re on a different computer I can’t access right now, so give me a couple more hours and you’ll get some more information on the topic. If you flag this I don’t care; I’m just wondering if anyone was curious about this.

Great news: I got access to some more info, so here’s a notepad of my inner thoughts:

gotta figure out the neural network

so concatenation is a big yes if we want this thing to not be memory intensive and actually hold a good deal of information without using too many properties.

Now we need all the weights in one property. Mind this: each neuron has weights based on how many inputs it’s receiving. This could be more or less depending on specialization, but I feel that might mess with the system.

I also need to make sure I use only one trigger, at least, because if we concatenate we can just repeat until all the neurons have calculated.

We also have to condense all the names of the neurons into a property. Technically these would never actually exist; they’d only exist in theory, based on matching which index is which. For example, character 5 would be the 5th neuron, with its own 3 weights.

If we imagine we have 6 neurons (2 input, 2 hidden, and 2 output), then both inputs would go into both hidden neurons but come out as two different numbers, since the hidden neurons have different weights and biases. The same happens again with the output layer.

I believe the biggest problem is grabbing the numbers and then changing them again with backpropagation.

Let’s do an example. Same example as before.

input 1 = .1 (I might shorten this by saying I for input)

input 2 = .4

Remember, they don’t have weights; they just pass their values on.

hidden1weight1 = .2 (might simplify by saying H1W1, where H is hidden and W is weight. This will save space with the character limit)
hidden1weight2 = .5

hidden2weight1 = .4 (also, giving numbers to each neuron and each weight might make it easier to find needed values)
hidden2weight2 = .9

so then we would multiply each input with each weight, so .1 * .2. I’m not including bias right now, since if I solve weights I can incorporate the same system for bias.

anyways, then we would get a value, .02. This would feed into the next input for the output layer, but you get the point now.
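
in Python, the example so far looks like this (no bias or sigmoid yet, matching the note above; the second term per neuron comes from the weighted-sum formula in the guide):

i1, i2 = 0.1, 0.4

h1 = (i1 * 0.2) + (i2 * 0.5)  # H1W1 and H1W2 -> 0.02 + 0.20 = 0.22
h2 = (i1 * 0.4) + (i2 * 0.9)  # H2W1 and H2W2 -> 0.04 + 0.36 = 0.40

# h1 and h2 then become the inputs for the output layer
print(h1, h2)  # roughly 0.22 and 0.4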

How do we grab the weights though? It’ll be easy enough to grab the neurons: since I’ll label them with numbers, I can use another property for incrementation to see which neuron I’m on. Then some system where, after I reach the max weights, another property resets to 0 and goes back to counting. Since I have 2 weights, we’d go 1, 2 and then reset back to 1 again when we reach the next neuron. Usually these run in parallel, but I’m not really sure how to do that, as long as I don’t calculate like a snake but send results to the next layer (which I might add to the neurons’ names to identify, like L1H1W1, where L is which layer we’re on, again using a property to keep track).
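
here’s roughly how those counters could behave, sketched in Python with plain variables standing in for the tracking properties:

NUM_NEURONS = 2
WEIGHTS_PER_NEURON = 2

neuron_counter = 1
while neuron_counter <= NUM_NEURONS:              # outer repeat over neurons
    weight_counter = 1
    while weight_counter <= WEIGHTS_PER_NEURON:   # inner repeat over weights
        print("neuron", neuron_counter, "weight", weight_counter)
        weight_counter += 1                       # counts 1, 2, then resets
    neuron_counter += 1                           # move on to the next neuron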

Apparently I have to save the inputs, so maybe I’d have a different property that keeps track of which input belongs to which neuron. Good thing I don’t need it to remember where it goes, so for each input created, I’ll probably save its value and place it next to its neuron.

following the example again, I’d have them grouped in stages of two, since each neuron has two weights. It does make me wonder, though, if I should also place them next to the neuron name, since then I could just find the value. But I’m not sure how I would separate them; I don’t think I can find the 5th comma, or at least there isn’t a block for that. I could try combining blocks, but I doubt I’d get any good results.

so maybe adding L1H1W1 next to the weights could work. I could force it to grab a certain number of characters ahead of the label, then grab that section and place it into a separate property. Then I could go through it without having to meddle with the other weights. This could work, since I would only have to find L[value of layer]H[value of hidden neuron]W[value of weight]. When I say value, I mean which number we’re on, like the 3rd layer, 16th neuron, with its 4th weight.
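
a sketch of that grab-by-label idea in Python, assuming each weight is written at a fixed width (4 characters here, sign included) right after its label:

hub = "L1H1W1+0.2L1H1W2+0.5L1H2W1+0.4L1H2W2+0.9"

def grab_weight(hub, layer, hidden, weight, width=4):
    label = "L" + str(layer) + "H" + str(hidden) + "W" + str(weight)
    # find the label, then grab the set number of characters ahead of it
    start = hub.index(label) + len(label)
    return float(hub[start:start + width])

print(grab_weight(hub, 1, 2, 1))  # 0.4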

Now, getting the neuron to figure out what input it needs. I did mention earlier saving each input created and keeping it next to its creator neuron. So I could subtract 1 from the Layer property and possibly iterate through the neurons there. BUT WAIT. How would I figure out when to add another 1 for layers? Neurons and weights are easy, since they have to be added each time (weights are added multiple times, so I could do a repeat in a repeat, and then neuron + 1 when the code finishes). But layers go up by 1 at different times. One layer could have 68628 neurons but the next only 63. Hmm, I could use another property and label or highlight which neurons are the last in a layer, so when the code finishes, it just adds 1, since it knows from the property that it will go to the next layer. I’m thinking of also marking the first neuron in the layer, probably spaced with a ! before or after it so it’ll look different. This might make it easier to iterate through the previous layer, since the code can just grab the first and last neurons and iterate through there, at least just for the inputs.

Or perhaps(!) I can just make a property to hold all the inputs created, so when we run, it’ll grab those inputs and then replace them with the newly created ones for the next layer. Obviously I’ll still save them with their creator neuron, since I still need them for backpropagation.
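
that swap could look like this, with one list standing in for the “current inputs” property:

current_inputs = [0.1, 0.4]  # starts as the raw inputs

# one hidden layer here; each inner list is one neuron's weights
for layer in [[[0.2, 0.5], [0.4, 0.9]]]:
    next_inputs = []
    for neuron_weights in layer:
        next_inputs.append(sum(i * w for i, w in zip(current_inputs, neuron_weights)))
    # replace the old inputs with the newly created ones for the next layer
    current_inputs = next_inputs

print(current_inputs)  # roughly [0.22, 0.4]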

Furthering the finding-the-previous-layer thing: maybe if I section them in brackets, each closed bracket represents a layer? Maybe they could start as [] but once I iterate I change them to () to signal they’ve already been used and whatnot.

****** Make sure to read future Me.

Ok, quick recap. As of right now I have decided to give specific names to the neurons, like L1H1W1, or maybe just L1N1W1, where L is layer (this will be important not only during backpropagation but in the forward pass too, when future layers need to grab their respective inputs from previous ones). Since all of the properties are gonna be named “Neuron”, there won’t be any distinction between which is hidden and which is output. This is why I suggest we have an if detecting whether a neuron has a number that makes it an output. For example, say we have 6 neurons as before; the last two (5 and 6) are the outputs, so we’d check if the neuron we’re on is 5 or above. If so, it’s an output and its value should be saved for final consideration.
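
A minimal sketch of that output check, assuming neuron numbers just count up through the layers:

FIRST_OUTPUT = 5  # with 6 neurons, neurons 5 and 6 are the outputs

for neuron_number in range(1, 7):
    if neuron_number >= FIRST_OUTPUT:
        print(neuron_number, "-> output neuron, save its value for final consideration")
    else:
        print(neuron_number, "-> earlier neuron, pass its value on")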

Here’s what I’m thinking for weights and neurons. Earlier above I mentioned condensing them into one property but didn’t give it much thought. The weights should be grouped in a way where I know where one neuron’s weights start and another’s end. Once again, maybe using square or curved brackets to group them together, perhaps along with the neuron they identify with.

Ex: N1(.1, -.3, .8)N2(.2, .6, -.4)…

But with the negatives it’s an extra character, so what do I do about that?
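
For what it’s worth, splitting on the brackets sidesteps the negative-sign problem entirely. Here’s a rough Python sketch of parsing the example format above (Gimkit blocks obviously don’t have regex; this is just to show the grouping logic):

import re

hub = "N1(.1,-.3,.8)N2(.2,.6,-.4)"

# pull out each neuron's bracketed group, negatives and all
weights = {}
for name, body in re.findall(r"(N\d+)\(([^)]*)\)", hub):
    weights[name] = [float(w) for w in body.split(",")]

print(weights["N1"])  # [0.1, -0.3, 0.8]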


  1. I will provide an example soon enough, but since the network is really complex, I need to explain more concepts or terms before I can give you an example. ↩︎

  2. not an excuse to start pinging me ↩︎

17 Likes

how will you have enough memory?

6 Likes

Character limits are exceedingly high, so we can hold a good number of weights per neuron for precision and a good number of inputs for extra brain power. Not only that, we also have a max of about 126 properties if my memory serves right, plenty of neurons to add if you agree.

Edit: Boss, let’s refrain from off-topic posts. Remember, the only thing I will allow is posts contributing to the topic; anything else will be flagged. Only warning.

11 Likes

Yessss we are totally through this time

3 Likes

We’ve essentially already made an attempt at this concept, but we never really got past weights. It would likely be very memory- and property-intensive, and hard to make in the first place.

5 Likes

Not if we make hubs and use every trick in the book to reduce block memory. As I said earlier, the idea is to make centralized properties that hold all the information of every node: one for all the weights, one for all the biases, etc. Then we just pull out the needed values through another property that keeps track of what node we’re on.

(Added the notes I was talking about)

7 Likes

Why are you on my lawn?

But yes, we’ve already developed a concept of AI, and RNG is the only method that is possible without modding the game.

A traditional weight would require default value editing in-game, which, as far as I know, is quite literally impossible.

Also... you should probably read this.

3 Likes

AIs have a system called backpropagation, where they calculate the error and update the weights and biases. I believe the system is very possible to code, so we can get an AI that updates itself over time. At most, we might need to tell it what the correct answer is after every run, unless we make a system for that too.
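
A minimal sketch of the idea, using plain gradient descent on a single weight (this is generic textbook stuff, not a worked-out Gimkit design):

# one neuron, one weight, no activation, learning rate lr
x, target = 0.4, 1.0
w, lr = 0.2, 0.5

for step in range(3):
    output = w * x
    error = output - target   # how wrong we were
    w -= lr * error * x       # nudge the weight to shrink the error
    print(step, round(w, 4), round(error, 4))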


Considering your quotes, they don’t refer correctly to what we’re doing here. I’m not trying to make an AI that can make images or write poems as well as professional artists. The AI we’re trying to accomplish is one that can make gameplay more exciting, such as more intelligent bots that can replace players, making single-player games more exciting if you want a multiplayer game. It’s the same idea as when games include bots. The incomplete guide I posted in this topic itself mentions an example of what the AI can do.

I understand what Cassius is trying to say, but this isn’t like ChatGPT mindlessly making poems for you and making the professionals’ work obsolete. It’s just an AI that can act as a replacement for a player and still provide some sort of multiplayer vibe. Most games institute that, but do we call them out for “stealing the work of real artists”?

7 Likes

I love the idea of developing AI in Gimkit!

But there’s still one fatal flaw that all these AI topics share.

The memory.

As a refresher, Gimkit Creative’s memory cap is 100,000.
Making an AI would need thousands of devices and blocks of code.

This alone would need millions of memory to initiate.
But this isn’t the only thing the AI would be doing.

The main purpose is to respond to what the player is doing.
This would also take millions of memory.

So when talking about AI, don’t forget to mention the memory.

3 Likes

then we just lower the quality, no biggie

1 Like

Not exactly.

Just creating the AI will take millions of memory.

Which we definitely don’t have.

2 Likes

Not exactly…
Everyone says AI this and AI that these days, but the truth is, AI is just an evolved form of machine learning.
Neural networks are used in machine learning, which is used in AI.
So, since this discusses making neural networks, not AI, doesn’t that mean we’re aiming for machine learning first? You know, a foundation.
And doesn’t that mean it takes less memory?

Don’t forget, it’s neural networks we’re aiming for. We’re not implementing every single difference between AI and machine learning, and by extension, neural networks.

lookie here

wait @toothless how many inputs and hidden layers are we aiming for (I’m assuming one output layer)

…why am I thinking bitwise operations might be a great tool

4 Likes

Once again, the hubs and intense concatenation should get rid of this. You could stack hundreds and hundreds of neurons in one property due to the big character limit on properties. So just a couple of properties could hold an intense amount. Anyways, I’m not reaching for over a billion neurons like actual neural AIs have, just something a little scaled down. Like Boss said,

5 Likes

Uh, let’s start small with at least 8 inputs, then about 2 hidden layers with 8 and 5 nodes respectively, and then maybe an output layer with 5.

1 Like

Ah. I see.

You hadn’t previously stated what exactly we would be using this neural network for, so I assumed a generative AI language model, since that seems to be the only kind of AI people talk about.

I’ve read through the whole original post, and it unfortunately seems I’m a little out of my depth in terms of how you appear to be creating this functionality. I doubt anyone else will really be able to contribute anything of significance either.

However, I do have one idea, based off my interpretation of the last problem presented in your notes.
You could pull out a neuron as a substring when you want to read or change it, just to make things easier. You’d start by taking the substring from the neuron’s name to the end of the property, then chop everything off from the close parenthesis to the end of the substring. This would allow you to pull out individual weights from the neuron by labeling each one with a letter, like so:

N1(A.1B-.3C.8)N2(A.2B.6C-.4)

I’m unsure if this makes sense or if it would help you at all, as I’m kinda cloudy on how this works. Let me know your thoughts.
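
If it helps, here’s that two-step chop sketched in Python (the names and the fixed letters A/B/C are just from the example above):

hub = "N1(A.1B-.3C.8)N2(A.2B.6C-.4)"

def get_weight(hub, neuron, letter):
    # step 1: substring from the neuron's name to the end of the property
    rest = hub[hub.index(neuron + "("):]
    # step 2: chop everything off from the close parenthesis onward
    body = rest[:rest.index(")")]
    # the weight runs from its letter up to the next letter (or the end)
    start = body.index(letter) + 1
    end = start
    while end < len(body) and not body[end].isalpha():
        end += 1
    return float(body[start:end])

print(get_weight(hub, "N1", "B"))  # -0.3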

5 Likes

I’ve actually somewhat recently made a (very small) neural network in GKC; the only thing is that I haven’t implemented a learning system. (I tried to implement one, but it was a bad and poorly thought out idea compared to backpropagation, and it didn’t work. Even though it took me a while to make, I won’t be including anything about it, since it sucked.)

The network has:
2 inputs (there’s also a “key”, which doesn’t change the output; rather, it’s used for finding the cost),
one hidden layer with 2 nodes, and one output value
(like I said, a very small network).
Every node has weights and a bias, which are implemented as individual properties.

the goal of the network is to solve the “XOR problem”, which goes like this:
if both inputs are 0, output should be 0
if one input is 1 but the other is 0, output should be 1
if both inputs are 1, output should be 0
(currently it can’t learn to do this, but theoretically it should be able to do it if you set the properties to the right numbers. I haven’t actually tried this yet, though.)


these are the properties

to explain what they mean: if one is called weight X-Y, X refers to the node “number” (node 1 is the first in the hidden layer, node 2 is the second in the hidden layer, node 3 is the output), and Y refers to which input the weight is multiplied by (e.g. weight 1-1 is a weight that is multiplied by input1, and it’s for hidden node 1).
in other terms, the first number tells what node it is, and the second number tells what connection the weight is for
(sorry if this explanation wasn’t great, I might come back and try making a better one later)

the biases follow a similar numbering system (bias1 is for the first hidden node, bias2 is for the second hidden node, bias3 is for the output)

the triggers on the left are used for setting the inputs and key

technically, none of the output properties except for net_output are needed, but they’re useful to see what’s going on in the hidden layer nodes
net_output is just the final output of the network, calculated based on the weights and biases

the block code for getting the net_output is too wide to put in a picture, so i’ve typed it here:

set input1 to GetProperty "input1"

set input2 to GetProperty "input2"

set output1 to GetProperty "bias1" + (input1 x GetProperty "weight1-1") + (input2 x GetProperty "weight1-2")

Set Property "output1" value output1

set output2 to GetProperty "bias2" + (input1 x GetProperty "weight2-1") + (input2 x GetProperty "weight2-2")

Set Property "output2" value output2

Set Property "net_output" value GetProperty "bias3" + (output1 x GetProperty "weight3-1") + (output2 x GetProperty "weight3-2")

i also made code to find the cost, but it isn’t at all complex
(literally just cost = net_output - key)

This network implementation is obviously not great; it uses 13 properties for a network with just 2 hidden nodes. (I didn’t count the output properties except for net_output, since it’s the only one really needed.) I’m sure there could be a more efficient way, perhaps using the substring idea some_kid talked about. The caveat of this is that it would add block code complexity, which could get expensive fast.

With my current system, you can’t go super far. You could make, say, a network with 5 inputs, 2 hidden layers (one with 8 nodes, the other with 5), and 4 outputs. But if you increased any of those numbers by one, you would go over the property limit. (There are tons of combinations that fit under the property limit; this one is just an example.)

(you can skip this paragraph here, I just added it for anyone who’s curious)
(If you want to know how I calculated this, I made a function in Desmos scientific: f(i,h,j,o) = 1 + i + h(i+1) + j(h+1) + o(j+1) + o, where i = inputs, h = hidden layer one nodes, j = hidden layer two nodes, o = outputs. Unlike the network I actually made, this includes variables for a second hidden layer. The reason this function works is that each node adds weights based on how many nodes are in the previous layer, plus one for its bias.)
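
(For instance, plugging in the 5-input example above: f(5, 8, 5, 4) = 1 + 5 + 8(6) + 5(9) + 4(6) + 4 = 1 + 5 + 48 + 45 + 24 + 4 = 127 properties, which is why increasing any of those numbers by one tips it over the limit.)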

This might be enough nodes to be practically useful, but it would also be really tedious and difficult to construct, and you would need a lot more block code, which would take tons of memory, alongside taking up nearly every property. (Frankly, I don’t even know if a network of this size would be possible to construct in practice with this method.)

If anyone wants, I can post a video of the network calculating outputs, to show that it really works.

8 Likes

How much memory was that? I think we can work off of this.

1 Like

I do agree that multiple properties per node destroys the whole idea, although I do think with a bit of ingenuity we can work around the block issue. My first iteration of the neural network was able to work under 75 blocks, although the calculation had to be run through a trigger for every node, which added 540 memory for each new node. I could upload my old code so you guys could use it to help with further development.

8 Likes

I have an idea. A while ago, Blackhole made Brainduck in GKC. If we can recreate that (or, preferably, an easier Turing-complete language), the problem moves from being a Gimkit Creative problem to being, for example, a Python problem, which will probably be much easier to figure out.
Once I get a chance, I’ll try to see how hard making a Python interpreter is. [1]


  1. Also, happy new year, Gimkit forums! ↩︎

6 Likes

Could we cross the properties over? Surely we can use the same property for multiple parts of the actual AI functions. The idea is that we use a certain property in one part, the bit the player would interact with in some way, and then take part of the same function and use those property values later on, as long as we reset to the original before we start over again on a new cycle. Of course, this wouldn’t work with some functions that are used in the crossover between a certain input and output, but anything other than the ‘bit in between’ could be stored and then reused again in a later function. This would save plenty of memory over time. Thankfully, within the current boundaries, we can duplicate and store a number of properties in blocks and extract them later, which gives this system the advantage of working. I estimate (don’t trust me too much) that this could save over 5000 memory within our current system ideas. [1] If I had more time right now I would test this; I might try tomorrow. This is a hypothesis as of now, but I believe a Duplicate, Store, Extract system would work. Please correct me otherwise.
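
A rough Python sketch of what a Duplicate, Store, Extract cycle might look like (all names hypothetical; a dict stands in for Gimkit properties):

properties = {"shared": 0}
storage = []

def run_cycle(player_input):
    # Duplicate/Store: copy the property's current value away before reusing it
    storage.append(properties["shared"])
    properties["shared"] = player_input * 2   # reuse the same property mid-cycle
    result = properties["shared"]
    # Extract: restore the original so the next cycle starts clean
    properties["shared"] = storage.pop()
    return result

print(run_cycle(3))          # 6
print(properties["shared"])  # back to 0 for the next cycle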


  1. This is all under the assumption that wahoo doesn’t come up with something that changes everything as converting to python would save everything on another level. ↩︎

4 Likes