• Member Since 7th Apr, 2013
  • offline last seen Jan 15th, 2019

ArtichokeLust


Wow. Visiting this site again was like going back to my old neopets page. So much nostalgia.

More Blog Posts (35)

  • 385 weeks
    Where art thou, oh Artichoke?

    Sorry, I'm a bit busy. I had two hobbies before I got a job: writing here and developing AI. I hit a wall here with the new changeling episode, but in my AI development, I found a highway.

    Look at my AI! My AI is amazing!


    [link if image doesn't work]


    0 comments · 475 views
  • 396 weeks
    The New Hive Super Edit

    Just letting you guys know I nearly redid the entire last chapter because of a few complaints I got. (In case re-publishing didn't work.) They were definitely valid, even if I didn't agree 100% with the reasoning behind them, and now I think my story's much better because of the changes I made.

    Keep up the constructive criticism!

    0 comments · 487 views
  • 398 weeks
    Testing testers

    Hello guys. Here's part of what I've been doing while I've been rewriting my main story:

    [webm]


    0 comments · 361 views
  • 403 weeks
    Would you like a collab?

    I've actually been thinking of adding some collaboration into 'The New Hive' since its inception. I can't really create a realistic representation of humanity all by myself, after all.


    29 comments · 708 views
  • 404 weeks
    Waiting for editors to compile...

    4 comments · 443 views

Waiting for editors to compile... · 6:32pm Aug 13th, 2016

So, my next chapter's done. Just need to wait for editors to do their thing. And from what I can see from the first editor, I made quite a few mistakes.

Well, now I have guilt-free free time... What should I do... Hmm...


Weeeeee!

Just kidding.

Actually, I've been working on a few other things. Lemme show you real quick.


Did you catch all that?

Right now, I'm trying to create some sort of neural network. I've looked into HTM (Hierarchical Temporal Memory), and it looks really, really good, but for the things I want to do with it, I need to optimize it. It can already find patterns in graphed data, which is useful for many things, and the experimental code can do some video prediction, though I have no idea how that part works. Plus, I need to really understand the algorithm, because I'm using it for more than just prediction.

So, for now, to learn OpenCL, I'm going to construct a standard 'neural network' genetic algorithm in OpenCL. That sounds complicated, but really all I'm doing is starting with a very small set of vertices, directed edges between those vertices, per-vertex activation requirements, and edge weights, and then adding things to the network or removing them depending on whether it does well or not.
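
(If that's still abstract, here's roughly what I mean as a host-side C sketch; the struct layout and the mutation mix are invented for illustration, not my actual code.)

    /* Minimal host-side sketch of the evolvable network genome described
     * above. All names here (Genome, Edge, mutate) are illustrative. */
    #include <stdlib.h>

    typedef struct {
        int   src, dst;     /* directed edge: src -> dst         */
        float weight;       /* evolvable edge weight             */
    } Edge;

    typedef struct {
        int    num_vertices;
        float *thresholds;  /* per-vertex activation requirement */
        int    num_edges;
        Edge  *edges;
    } Genome;

    static float frand(void) { return (float)rand() / RAND_MAX; }

    /* One mutation step: nudge a weight or threshold, or add/remove an
     * edge, chosen at random. Real code would also add/remove vertices. */
    void mutate(Genome *g)
    {
        float r = frand();
        if (r < 0.4f && g->num_edges > 0) {        /* perturb a weight */
            g->edges[rand() % g->num_edges].weight += 0.1f * (frand() - 0.5f);
        } else if (r < 0.7f) {                     /* perturb a threshold */
            g->thresholds[rand() % g->num_vertices] += 0.1f * (frand() - 0.5f);
        } else if (r < 0.9f) {                     /* add a random edge */
            g->edges = realloc(g->edges, (g->num_edges + 1) * sizeof(Edge));
            g->edges[g->num_edges++] = (Edge){
                rand() % g->num_vertices, rand() % g->num_vertices,
                frand() - 0.5f };
        } else if (g->num_edges > 0) {             /* remove a random edge */
            g->edges[rand() % g->num_edges] = g->edges[--g->num_edges];
        }
    }

Selection then just keeps the genomes whose networks score best and mutates copies of them.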

The thing is, I want to take full advantage of OpenCL's GPU, CPU, and FPGA optimizations, starting with GPUs. In OpenCL, there's private, local, and global memory. Global memory can be accessed by all the work-groups, local memory can be accessed by all the work-items in a work-group, and private memory can be accessed by one work-item only. Ideally, I should try to keep as much as possible in private or local memory, and make as few calls to global memory as I can... well, based on what I've read about CPU caches. I haven't read the OpenCL best-practices booklet yet.
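
(To make those tiers concrete, here's a textbook OpenCL C reduction kernel, not mine and nothing to do with neural networks, that keeps a value in private memory, stages work in local memory, and touches global memory only at the edges; it assumes the work-group size is a power of two.)

    /* A textbook OpenCL C reduction kernel, shown only to illustrate the
     * three memory tiers; it isn't from this project. */
    __kernel void sum_activations(__global const float *in,  /* global: all work-groups */
                                  __global float *out,
                                  __local  float *tile)      /* local: one work-group   */
    {
        size_t gid = get_global_id(0);
        size_t lid = get_local_id(0);
        size_t lsz = get_local_size(0);

        float v = in[gid];     /* 'v' is private: one work-item only */
        tile[lid] = v;         /* stage it in fast local memory      */
        barrier(CLK_LOCAL_MEM_FENCE);

        /* Tree reduction inside the work-group: every pass touches only
         * local memory, so global memory is read once and written once. */
        for (size_t s = lsz / 2; s > 0; s >>= 1) {
            if (lid < s)
                tile[lid] += tile[lid + s];
            barrier(CLK_LOCAL_MEM_FENCE);
        }
        if (lid == 0)
            out[get_group_id(0)] = tile[0];  /* one global write per group */
    }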

So, there are a few nuances to just that. Global memory is limited, and it doesn't make sense to just copy global memory into local memory, as that would take nearly as much time as accessing global memory several times. No, what I have to do is define a set of small private neural networks that will be run depending on which identifier the local array gives, and I need to define a set of local neural network groups that will be run depending on which identifier the global array gives. These neural networks and clusters and super-clusters will need to be defined based on what the genetic algorithm finds to be useful. So, I'll either have to identify function groups myself, or I'll need to write an algorithm that finds regions of graphs with high connectivity separated by regions of graphs with low connectivity, and I'll need to modify that algorithm so it optimizes the groups for OpenCL, however that will work.
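
(For the 'high connectivity' part, a simple generic heuristic like label propagation would be one starting point; this sketch is that generic heuristic, not the OpenCL-aware grouping algorithm I'd actually need.)

    /* A crude label-propagation pass: a well-known heuristic for finding
     * densely connected regions of a graph. Generic sketch only. */
    #include <stdlib.h>
    #include <string.h>

    /* edges: num_edges pairs (src[i], dst[i]), treated as undirected here.
     * labels: out-array of length n; vertices sharing a label form a group. */
    void label_propagation(int n, int num_edges,
                           const int *src, const int *dst,
                           int *labels, int iterations)
    {
        for (int v = 0; v < n; v++) labels[v] = v;   /* start: own group */

        int *counts = calloc(n, sizeof(int));
        for (int it = 0; it < iterations; it++) {
            for (int v = 0; v < n; v++) {
                memset(counts, 0, n * sizeof(int));
                /* count neighbour labels of v (O(E) per vertex: fine for a sketch) */
                for (int e = 0; e < num_edges; e++) {
                    if (src[e] == v) counts[labels[dst[e]]]++;
                    if (dst[e] == v) counts[labels[src[e]]]++;
                }
                int best = labels[v];
                for (int l = 0; l < n; l++)
                    if (counts[l] > counts[best]) best = l;
                labels[v] = best;  /* adopt the most common neighbour label */
            }
        }
        free(counts);
    }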

All of those groups will need to be accessed as quickly as possible. I'm not sure if I should use linked graphs or hash maps for that. It depends on how OpenCL/GPUs work...
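
(From what I've read so far, GPU graph code usually sidesteps both options and stores adjacency as flat compressed-sparse-row (CSR) arrays, since each vertex's neighbours then sit contiguously in global memory; roughly like this.)

    /* Sketch of a CSR (compressed sparse row) adjacency layout: the flat
     * representation GPU graph code typically prefers over linked nodes
     * or hash maps. Names are illustrative. */
    typedef struct {
        int    num_vertices;
        int   *row_start;  /* length num_vertices + 1                  */
        int   *col;        /* length num_edges: destination vertex ids */
        float *weight;     /* length num_edges: matching edge weights  */
    } CsrGraph;

    /* Iterate over the out-edges of vertex v. */
    void visit_out_edges(const CsrGraph *g, int v,
                         void (*fn)(int dst, float w))
    {
        for (int e = g->row_start[v]; e < g->row_start[v + 1]; e++)
            fn(g->col[e], g->weight[e]);
    }

Pointer-chasing a linked graph scatters reads all over global memory, which is exactly what the previous paragraph says to avoid.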

Additionally, I need to have the genetic algorithm add/remove neurons and edges, adjust edge weights and activation thresholds up and down, and add/remove clusters. And I'll need to determine whether a network is functioning better or worse. Right now I'm going to do that by checking whether a robot arm meets a required (possibly looping) sequence of positions and velocities as quickly, smoothly, and gently as possible.
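
(Something like this invented sketch, where the weighting between tracking error and jerk is a placeholder I'd have to tune.)

    /* Illustrative fitness sketch for the arm task: lower is better.
     * All names and weights here are invented for the example. */
    #include <math.h>

    typedef struct { float pos, vel; } State;

    float fitness(const State *actual, const State *target, int steps, float dt)
    {
        float err = 0.0f, jerk = 0.0f;
        float prev_acc = 0.0f;
        for (int t = 0; t < steps; t++) {
            float dp = actual[t].pos - target[t].pos;
            float dv = actual[t].vel - target[t].vel;
            err += dp * dp + dv * dv;                /* tracking error */
            if (t > 0) {
                float acc = (actual[t].vel - actual[t - 1].vel) / dt;
                jerk += fabsf(acc - prev_acc) / dt;  /* penalise jerk  */
                prev_acc = acc;
            }
        }
        return err + 0.01f * jerk;  /* the 0.01 weighting is arbitrary */
    }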

By the way, this isn't really an advanced neural network, even though it will eventually generate algorithms like PID loops, least-jerk motion, etc. These networks are meant for interpolation or really fast reactions. With this, when an HTM neural network tells one of these networks to move the arm holding the needle into the exact position to inject the activating chemicals into the correct cells (and not the ones surrounding them), then withdraw the needle without disturbing the cells and move it to its next spot without stabbing anything on the way, it should do all of that without making any mistakes.
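
(For reference, this is the kind of hand-written controller the evolved networks would be competing against: a textbook PID loop.)

    /* A textbook PID controller: the standard hand-written baseline for
     * this sort of tracking task. */
    typedef struct {
        float kp, ki, kd;   /* gains                        */
        float integral;     /* accumulated error            */
        float prev_error;   /* error from the previous step */
    } Pid;

    float pid_step(Pid *c, float setpoint, float measured, float dt)
    {
        float error = setpoint - measured;
        c->integral += error * dt;
        float derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative;
    }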



After writing that out, I now really understand how much I need to learn about OpenCL. It's time to go on a long walk with my cell phone and a couple of 'OpenCL best practices' PDFs.

Comments (4)

Is this for IRL or for your fanfiction simulation or both?

4148186

IRL.

I just went on a 3-hour walk while reading about OpenCL optimization and thinking about how it applies to neural networks. I decided to use k-ary trees instead of hash maps or pointers to represent neuron-to-neuron directed connections. If the number of neurons generated is n, then ceil(log2(n)) will be the minimum number of bits I use to address the neurons. If a neuron has more than k connections, it will point to a location greater than the number of neurons; that index represents a pseudo-neuron that skips the activation step and fans out at most k more connections. Now I'm going to write some code based on what I've found.
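
(In code, that addressing scheme comes out to something like this; the names are just for illustration.)

    /* Sketch of the addressing scheme described above: indices below
     * num_neurons are real neurons; indices at or above it are
     * 'continuation' slots that skip activation and only fan out. */
    #include <stdint.h>

    /* Minimum bits needed to address n distinct neurons: ceil(log2(n)). */
    unsigned bits_needed(uint32_t n)
    {
        unsigned b = 0;
        while ((1u << b) < n) b++;
        return b;
    }

    int is_continuation(uint32_t index, uint32_t num_neurons)
    {
        return index >= num_neurons;  /* fans out, never activates */
    }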

Comment posted by mastermenthe deleted Aug 14th, 2016

Aww, now I want to know what you said...
