
Iceman


Satisfies Values Through Friendship and Ponies.


3,000! · 11:32pm Nov 27th, 2016

Friendship is Optimal just broke 3,000 upvotes. I'd like to thank each and every one of you who enjoyed my story. I never imagined that it would ever be this popular or spread as far as it has.

Thank you for making my numbers go up!


Comments ( 57 )

2942372
That has to be intentional. It just can't be accidental.

So they just premiered a show called "Next" on FOX tonight, and it's about trying to contain a runaway, superintelligent, self-improving AI. Some FBI woman gets tangled up with a guy who worked on the AI, and he says to her, "It's the holy grail [of AI]" and that "Google, IBM, the Chinese, hell, even Hasbro would like to get their hands on this."

Man, I wish that was an intended reference.

2901379
Just tried TabNine out. It's pretty damn cool, ngl. Just made some simple stuff with it, and it's been working most of the time if I give it comments (though I am using the free version, so I don't know how much that contributes). Also, earlier this month Facebook put out a pretty interesting thing, though I'm guessing you keep up with this stuff 😂

2900278
It doesn't invalidate concerns about the long-run trends... but if you are talking about this Microsoft research video, I'd urge a bit of skepticism about how practical that is, at least for now. For every successful use of GPT-2, there are about ten completely bizarre misfires. The one time it works is what goes in the video or in the blog post, and I'm just as guilty of cherry-picking its results as everyone else.

What's much more near-term practical and exciting to me is feeding GPT-2 results into a programmer's existing text editor. Seriously, look at these demos for TabNine. This is a product you can buy today! Instead of writing the full function for you, it'll suggest the next n tokens, weighted by probability, and feed those suggestions into the existing autocomplete system.
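To make the idea concrete, here's a rough sketch of "suggest the next tokens, weighted by probability" using the public GPT-2 weights through the Hugging Face transformers library. This isn't TabNine's actual code or model, just an illustration of the loop an editor plugin could run on every keystroke.

    # Rough sketch: rank the most probable next tokens for the text before the cursor.
    # Not TabNine's implementation; just the general idea with public GPT-2 weights.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def suggest_next_tokens(prefix: str, k: int = 5):
        """Return the k most probable next tokens after the given text."""
        inputs = tokenizer(prefix, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # scores for the next token only
        probs = torch.softmax(logits, dim=-1)       # turn scores into probabilities
        top = torch.topk(probs, k)                  # keep the k best candidates
        return [(tokenizer.decode(int(i)), float(p))
                for i, p in zip(top.indices, top.values)]

    # An editor plugin would rank these (token, probability) pairs in its normal
    # autocomplete popup instead of printing them.
    print(suggest_next_tokens("def fibonacci(n):\n    if n <"))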

I remember reading FiO a long time ago, before I understood what a computer even was or did. Coming back to it nowadays, it really opened my eyes to just what the future could look like (to an extent, obviously), more so than any Robert Miles video ever did. With the advent of GPT-3, and its insane ability to create text indistinguishable from stuff written by people while also being able to semi-easily pick out features in things it has little to no training data on, I've been freaking out about the future.

I've seen a few videos of people who trained GPT-2 on Python GitHub repositories. They could give it a function name and a comment on what the function should do, and the model would spit out the code required to do it. A machine was performing more thoughtful and intelligent coding than a large number of people, in a small fraction of the time. Given that within a year of GPT-2's release, OpenAI has pumped out an even better and larger model AND had Microsoft build them their own world-class supercomputer, I doubt it'll be many decades before we reach that final breakthrough. Just want to know your thoughts on the matter and all that jazz.
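(For the curious, the "function name and comment in, code out" loop those videos show looks roughly like the sketch below. The stock "gpt2" checkpoint here is only a stand-in for the GitHub-Python fine-tuned models they used, and the prompt is made up for illustration, so the completion it samples will be far rougher than theirs.)

    # Sketch of the prompt -> completion loop: seed the model with a comment and a
    # function signature, then let it sample the body. Stock GPT-2 stands in for a
    # code-fine-tuned model, so expect rough output.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = (
        "# Return True if the given string reads the same forwards and backwards.\n"
        "def is_palindrome(s):\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=60,                    # how much to write after the prompt
        do_sample=True,                       # sample rather than greedy-decode
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad warning
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))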
