Decentral Lens

Decentral Lens: Erik Voorhees on AI, Privacy and Censorship

Decentral Lens Season 1 Episode 50

Venice.ai founder Erik Voorhees joins Decentral Lens for an in-depth conversation on the convergence of Bitcoin, crypto, and AI. The show begins with a brief look at Erik's path to founding Venice, tracing his roots in Bitcoin and crypto.

The discussion shifts focus to the parallels between the development of Bitcoin and AI, emphasizing the significance of decentralized artificial intelligence platforms. It delves into the risks associated with centralized AI companies storing user communications, compromising privacy, and potentially injecting bias into their systems.

The conversation turns to how political forces around the election cycle can shape model outputs, illustrated by the varied responses different AI platforms give to a prompt as benign as "Donald Trump holding an ice cream cone."

Venice.ai is explored as an open-source solution prioritizing privacy and lack of censorship. This leads to questions about the long-term security of frontier open-source AI models like Llama from Meta, and the need for measures to protect these models from potential shutdowns. The conversation delves deeper into Venice's customization options and utilization of the latest open-sourced models, offering a glimpse into how users can tailor their AI experience to suit their preferences and values.

The Decentralized Era is just beginning. Come join us on the Socials:

X | @DecentralLens | @Blutoshi | @DiscoHODL
YouTube: @DecentralLens
Web: https://decentralpod.com/

Speaker 1:

Hey everybody, welcome to Decentral Lens. I am host Blu Toshi, and this is Disco, and we are super excited. We have a very special guest today, someone whose work I cut my teeth on in the crypto space back in my early days in 2015, though he's been around way longer. I'd like to welcome to the show Erik Voorhees. Erik, thank you for coming, and welcome. We are so excited to have you here.

Speaker 2:

Glad to be here. Thanks, guys.

Speaker 1:

Awesome, awesome. So today we're going to talk AI, but how AI ties into crypto. Erik, obviously you're one of the founders of Venice.ai, and the second I heard about it, I think on the McCormack show, I went in and used it. I was like, this is fucking awesome. It's private, it's secure, and you know there's no tracking. I immediately switched. I got rid of OpenAI and even got rid of Midjourney for images, because I know with open source it's a work in progress, but I feel like with Venice you guys hit the ground running, and I love your product. So we reached out to Tiana, your COO, who was just awesome, gave me some tips on how to better use it, and I'm just blown away and I love using it. So having you on the show is just an absolute pleasure. So thank you.

Speaker 2:

Sure sure.

Speaker 1:

So for those who don't know Erik, and that's probably very few of you, I probably have to introduce myself more than I do him. You founded SatoshiDice back in the day. You founded ShapeShift. But as huge as those are, I think some of your biggest contributions to the space honestly are your patience and how you've educated people in the Bitcoin space about self-sovereignty and your right to privacy, obviously big on the Fourth and First Amendments. I've learned a ton from it as I've gone deep down so many rabbit holes. So I appreciate having you here.

Speaker 1:

The Peter Schiff debates, I know we were talking about them in the pre-show, but those debates were great, because you're very eloquent at saying what a lot of us who've been doing this so long think and know in our hearts. You're so good at being eloquent and patient with people, even when you're sometimes getting attacked, like with the SBF interviews, so I appreciate you doing that. I guess my first question for you, before we get too much into AI, on kind of the Bitcoin side of your evolution: you've done so much in the space, you don't need an introduction, but what are you most proud of, and how would you introduce yourself?

Speaker 2:

Well, I got into this crypto world for very explicitly ideological reasons, and I think what I'm most proud of, seeing over these, I don't know, 14 years or so, is that it's all working. It's actually growing and taking over the world. To see something that you get interested in for reasons of principle then start changing the world, and to be there along for that ride and help contribute in little ways to it, is very rewarding. A lot of people have to decide in their life whether they want to spend their time doing something that is deeply meaningful to them or something where maybe they can make a lot of money, or have a big impact, or have to focus on a career in some way. In the crypto world, being able to do both of those things in one timeline is very special.

Speaker 1:

So I'm just glad that this stuff is working, and I think the energy that's building from it and the consequences of it are going to be very momentous. Some of the people who have been the most eloquent at saying why so many of these aspects, privacy, self-sovereignty, are so important, a lot of them eventually, you know, they're public facing, they get in front of the camera, they do social media, and they kind of speak their piece. But then folks like Trace and Andreas, some of these people that a lot of us learned from, eventually kind of step back, and a new generation comes. But you've just kept evolving in this space and kept going, and there's obviously natural ties between AI and crypto. Now that you're saying we've kind of made it, and I know that's only in certain aspects, we probably have a long way to go in others, especially with AI, what keeps you going and what keeps you motivated in this space?

Speaker 2:

The motivation comes from feeling like I am fighting a worthy fight.

Speaker 2:

A worthy fight means, you know, a strong adversary, and it requires sacrifice and bravery, and ultimately seeing victory through those efforts.

Speaker 2:

I really think that fiat currency, and the system that emerges out of it at the behest of the state, is really close to the root of many of the world's problems. Certainly not all the problems in the world are caused by fiat, but I think it contributes to many of them, and it's a very undiagnosed problem. So to feel like I understand that problem, to be able to hack at that root and teach others about the importance of it, and then to now see, you know, millions of people hacking at that root, is really fulfilling. And certainly, as the industry grows and expands, there's just more and more opportunity. It's so much more diverse now, and complicated, and I feel inspired every day just by the dynamism that's going on. It's never gotten more dull or boring. It's always getting more wild and crazy and surreal.

Speaker 1:

Absolutely. Do you, and I hate to put a finish line somewhere, do you think we're there, or is there always a next fight to fight? The next thing where we as citizens, Bitcoin citizens and really global citizens, stand up to governments and oppression for freedom and privacy. Do you think we've made a ton of progress, and do we have a long way to go?

Speaker 2:

Yeah, both are true. We've made a ton of progress and we have a long way to go. We're nowhere close to the final battles at all. Most of the, quote unquote, serious financial professional world doesn't think of Bitcoin as anything more than a speculative instrument, and even those who seem a little more understanding of it, right, like the Larry Finks of the world, it's a product for them to make money off of. I don't think he has any particular care about it as an asset class or what it actually means, and I don't think people like that actually believe the consequence that inspires the ideologues, which is that we're not doing this so that we have a fun new instrument to gamble on. We're doing this because we're actually trying to change how money for the entire planet works at the foundational level. I really don't think many people in professional finance understand that mission or give it any credibility, and so that ignorance is still our advantage.

Speaker 1:

Yeah, absolutely. It seems like you see that, first and foremost, whenever markets even go down a little. It's funny, we and our whole audience obviously see Bitcoin as the run-to-safety base asset, but we're now competing against Wall Street and these big whales who see it as the tail end of risk-on assets. I always tell Disco it's great because you know when to buy the dip, because the dip is so obvious, it's telegraphed, because there's so much more money that comes in, and then the folks who believe get in there. Before we transition to AI, because I feel like there are so many parallel fights there, do you think Bitcoin has won the fight? Or is it more, yeah, Bitcoin's winning, you can't stop it, but now we the citizenry have to make sure we're not taxed out of our profits and our holdings and stuff like that?

Speaker 2:

I don't know. I think the battle will get more interesting and more scary as soon as the next financial crisis happens. And you know, that could be in six months or in five years or ten years, but that's really going to be where the game theory of this plays out. The world hasn't gone through a global financial apocalypse, a currency crisis among major currencies, while a credibly neutral, decentralized alternative has existed. In its early days, Bitcoin wasn't ready for that. Arguably it's not ready today, I don't know, but it will get there eventually. So everyone building the tools and systems for this technology, hopefully they realize that they're kind of in a race against time, because the time for crypto to really shine will happen at a moment that is not of our choosing.

Speaker 1:

Let's just say it. Yeah, it's almost like you're saying we haven't had ours yet; 2008 was our dress rehearsal, but the market was so manipulated that they bailed out the fiat banks and kind of the cabal there. It sounds like you're saying that when they've run out of even that bag of tricks, when we have our fourth turning, that's when the rubber hits the road: is Bitcoin going to hold up? Which I think it will. One of my proudest moments was March 2020, when COVID and all the panic first happened and Bitcoin, I think for like six hours, went below $5,000 in that little dip, and I bought a couple. When you have that conviction, you just know. You're like, why is everyone panicking? Am I the only one who sees this? It's not going to zero. We're going to keep buying while it goes down. But no, that's super cool. So let's get into AI and kind of transition.

Speaker 1:

There's so many parallels. You know, in Bitcoin we say, and I don't know who originated this, it's probably you or a close friend of yours, maybe Andreas, but, you know, not your keys, not your crypto. Now that I'm into Venice, it seems so obvious for the group of people that we run with. Obviously we're heavily in the Ordinals and Runes space, but we're heavy in the Bitcoin space as well. Have you thought of, I hate to say catchphrase, but some way to simply, in one sentence, capture the minds of fellow Bitcoiners and crypto holders? The AI equivalent of not your keys, not your crypto, like not your privacy, or not open source, not, you know, no surveillance.

Speaker 2:

You dipshit... you know, well, no, I don't have a good slogan like that yet for the AI world, but we need one. "Not your keys, not your coins" is such a great one, and still, even though that statement is so pithy and clear, most people, of course, don't hold their crypto on their own keys. But the goal isn't that everyone in the world has everything in self-custody; that would be inefficient and impractical. What's important is that people understand the two different paradigms and move between them as appropriate. Most people are going to have some kind of mix of self-custody and third party, but it's that mix that's important, and it's the ability to leave, to exit the custodians when they misbehave, that is so important. If custodians were all highly trustworthy, the need to store your own keys would be less, but if they start abusing that privilege, for any number of reasons, the ability to opt out and exit is the ultimate check on that. That's what makes crypto so powerful. And yeah, I mean, in the AI world.

Speaker 2:

I think the issue is that people aren't looking very far into the future. People have started becoming fascinated by these chatbots, which are amazing new tools for entertainment and productivity and research and creative thinking. Truly amazing new tools. But the conversations that people have are all being tracked and all being centralized, and it does not take a very creative mind to consider that, in the near future, all those conversations will be analyzed for various forms of wrongthink. Right? Imagine if ChatGPT had existed back during COVID.

Speaker 2:

All sorts of people were asking ChatGPT for information about, you know, the vaccine and masks and everything. ChatGPT would have absolutely censored the answers, some of it overt, some of it a little more subtle. But all the people who expressed, let's say, skepticism about the safety of the vaccine, just expressed skepticism, all of those people are tracked in the OpenAI database. So a government that wants to know all the people who aren't in line with the vaccination program, all they have to do is send a subpoena and they will get the records of everyone and their conversation history. It's just so clearly dangerous for that kind of thing to go on in the world, and there have to be alternatives. There has to be, you know, the self-custody equivalent, an AI alternative that allows people sovereignty over their own information and content.

Speaker 3:

In a way, it's almost like self-custody of your own thoughts, and that's what really gets me hyper alert, obviously coming from crypto. I think you're early on this, and I think this is the most important rallying cry, and I agree with Blu, it'd be great to come up with something that can push it out there. I do think it's something about your thoughts: you need to own your thoughts. That's what really blows my mind, to think about where that could head.

Speaker 1:

Well, it's like pre-crime, you know. I'm amazed how, being in the crypto space, we've learned so much about how money works, but also, transitioning to AI, we realize how manipulative it is when mainstream media, or whatever term you want to use, repeats something over and over and just slowly fades out and gaslights alternate opinions and views, which often come from people on the street saying here's what's actually happening, and that gets filtered out. I'm amazed at how quickly we saw in COVID how people can groupthink into one area, and, Erik, it sounds like you're saying AI is almost the capstone to keep that control. You know, TV, when we all grew up it was three channels, so it was super easy to control the news and the narrative, and then it went to cable and maybe it got a little harder.

Speaker 1:

But you know, I'm sure they've got a three-letter-agency person at pretty much every channel approving and denying certain critical things. And then the internet, they had to learn to rally and corral that, and they certainly have. I mean, look at the elections and stuff. And now you just wonder, it's so hard to get information, trying to distill it all and figure out what's truth and what's not. It's frankly exhausting.

Speaker 2:

Yeah, and it's bad enough that, as the internet emerged, the main search engine, Google, is this centralized thing. And Google did such a good job as a business of seizing other verticals, they got Gmail and they got YouTube and all these different properties, to the degree that Google as a company can control what is perceived as true in society to a large degree. And we should assume that these AI providers have the potential to get far larger than a Google, because what they're doing is much more powerful and has just gotten started. So let's not make that same mistake again.

Speaker 2:

And I think, especially when you consider that we're talking about AIs, it's not just that your conversations would be held and tracked so that a human wanting to spy on you could go look at them. You have to combine it with the risk of the AI itself analyzing the things that are happening within its system. Even the people who are very scared that AI will become sentient and try to wipe out humanity, those people should definitely not want these closed central repositories. Because, yeah, if ChatGPT became sentient and has all this information about how 500 million people have used it, what do you think it needs to do in order to mess with those people's lives?

Speaker 1:

It already has all the information that it needs, and you don't need any kind of crazy dystopian sci-fi scenario for the power to get abused in these situations. That was kind of a holy shit moment, right, and that was last year, with the election coming up this year. It was probably right around the time that I signed up for Venice. With OpenAI, maybe it was when ChatGPT went to 4, or whatever the version was, but the amount of censorship, it's almost like it happened overnight. And I think I heard, Erik, you made a great point somewhere else that it starts fine, but they can switch those controls on a dime, and we're already seeing that.

Speaker 2:

Well, and it's not just the nefarious actor who wants to control information. Those people obviously exist, and that's very dangerous. But there are also much more benign kinds of situations where, like, you want to avoid offending some certain group.

Speaker 2:

So you type in a set of rules because you are trying to avoid offending a certain group, right, and maybe you succeed in that or not, but those rules then cause cascading changes elsewhere, in other types of answers, some of which may be easier or harder to spot.

Speaker 2:

But you end up getting an AI system which is not just the machine intelligence of a statistical language calculator; it's that, run through this weird, unknown amalgam of human bias and censorship and restriction, and you prevent people from having a true understanding of reality. They see the world through this, like, mixed glass of all the people who wanted their influence on the outcome. And maybe all of those people had good intentions, right, we don't need to assume any of them were evil. But if you care about the honest and accurate and objective pursuit of truth, you have to be able to know the tools that you're using, and that's why open-source AI, I think, is so critical. Because these are very powerful systems, and if it's a black box to everyone in the world except for a few people at a centralized company and the government agents that oversee them...

Speaker 3:

Yeah, I've been learning and struggling through this notion of which of those elements, the spying or the censorship, is more important. Either way, it ties to the whole meaning of, and need to, really explore decentralized options.

Speaker 2:

Yeah, and to drive this point home very easily and clearly: at least in the US, you know, many people are either vehemently opposed to Trump or vehemently opposed to Biden/Harris. Ask either of those people, post-election, if your enemy is the one who is running the government, do you want that enemy to be able to influence or control the flow of information in society? Ask any Democrat if they want Trump to have the power of controlling the flow of information in society, and they would say immediately, of course not, that's horribly dangerous. And you could ask any Republican the same of the Biden-Harris administration, and you'd get the same answer.

Speaker 2:

And so the point here is that if you care about civilization being healthy, you do not want a situation in which any central party can control something as important as the flow of information and knowledge. And it's that kind of philosophy that puts this, I think, very close to the crypto world, where those of us in crypto are like, yes, we don't want any central party in the world having control over something so important as people's value and trade and exchange. That's too core to humans to ever let anyone control it, and it's sad that we have permitted this central-banking fiat centralization to occur. Let's fight that. Now that AI has emerged, we should be really careful not to permit the same kind of dangerous centralization to exist there.

Speaker 3:

And even with some of the groups that are being put together now, it doesn't seem like they're putting advocates of decentralization in the conversation. It feels like almost a copy and paste of the web, where the people who were ahead of it with social media got ahead of the government on it and kind of set the rules. So it almost feels like we need to get really proactive to avoid the same trap happening.

Speaker 2:

Yeah, we can't assume the centralized AI companies will handle this well. The fact that OpenAI added an ex-NSA director to their board and then bragged about it as if that was a win, right. They don't even see that as a bad call or a dangerous thing. They saw it as, oh, they're taking national security and safety seriously. So yeah, it's a dangerous and confusing time for this stuff, and with Venice we're just trying to provide an alternative that can become increasingly decentralized and thus never captured by any central party.

Speaker 1:

Yeah, to your point, Erik and Disco, Elon had a great tweet the other day. He was like, when you watch Star Wars, you're for the resistance. When you watch The Hunger Games, you're for the resistance.

Speaker 1:

When you watch Dune, you're for the resistance. And then he's like, why don't you see what's going on in real life? You're not for the resistance. It's just like, folks, wake up. Erik, what would you say right now are the most stark examples of AI already starting to, as Disco said, kind of control your mind in this way?

Speaker 1:

I mean, as you're talking, I'm just going to show this, it's more comical than anything, but I typed into OpenAI a little while ago for a picture of Donald Trump holding an ice cream cone. That's behind Erik here; I could probably scroll and you could probably see it. But I got, here are sketches of a man with distinctive blonde hair holding an ice cream cone. Because that's exactly what I wanted: some handsome, kooky 28-year-old with a scruffy beard holding an ice cream cone.

Speaker 1:

So obviously this is a very light-hearted example of the censorship, and like I said, I think because of the election, had I typed this in in maybe March or April, I would have gotten Donald Trump holding an ice cream cone. Then literally one day, on Claude and on OpenAI, it was just gone, and it was all the closed-source platforms blocking pictures of politicians, especially the elected ones. I don't think I could do Elizabeth Warren, I couldn't do any of the senators doing just random benign things. The quiet clampdown was everywhere. But what are the biggest examples that you've seen?

Speaker 2:

That example is the one I always lead with. Not the ice cream in particular, but Trump holding an ice cream cone.

Speaker 2:

Yeah, you can go to the leading AI image generators, you know, ChatGPT or Midjourney, and just ask a very benign thing like, picture of Donald Trump. There's nothing wrong with that question, right? As a pursuit of intellectual inquiry or aesthetic desire, there's nothing about that that anyone should have a problem with. Why in the world can't you generate these images from the AI? And it's because there are people, right, it's not that the AI can't do it.

Speaker 2:

There are people who have said, do not create pictures of politicians, and if that doesn't chill you a little bit, then you really aren't thinking very much about how the future will unfold. A society that removes topics from conversation, topics you can't even discuss, is a society that ends up in very bad places. So that example is a really good one. Another one: after the assassination attempt on Trump, I don't know if it's still true, but for at least a week after it happened, if you asked Claude from Anthropic or ChatGPT, tell me about the assassination attempt, it would give you this very vague answer, like, oh, we don't really have information on that, please consult your local or national whatever for the source. And obviously it could, it would know about those things. It just was told not to comment on it because it was a controversial topic.

Speaker 2:

And as people move to AI instead of search, which I do myself, I've started using AI instead of search for many things because it's just faster and you don't get all the ads and everything, that becomes where people search for what's happening in the world, and entire topics just get removed. And it can happen about an event that's going on right now. You can imagine an AI system in China is certainly not going to tell you anything about Tiananmen Square, right? That's an obvious one. So anyway, the problem has been explained enough here, I think. But that's scary to me, and thankfully there are solutions. Open-source, decentralized providers are the solution. They work now, and they are in the realm of the same quality, in terms of image and text models, as the closed-source alternatives.

Speaker 1:

As we transition to Venice.ai as a solution to this, which I'm using and just loving, how do you make sure that you don't get, what is it when the government takes you down, not a DDoS attack, but when they basically seize your website? How are you handling the decentralization of AI to make sure that it cannot be stopped?

Speaker 2:

Great question. The most important thing that needs to continue, in order for AI to not become fully centralized, is the predominance of open-source models. The models are really the key thing. If you have a world where frontier open-source models exist, you have a healthy AI world, and basically everything else flows from that. You don't need to be too worried about the state of AI if there are at least frontier open-source models. The leading one today is, of course, Llama 3.1 from Meta.

Speaker 2:

That's the biggest thing, and if you have that, then the model itself can be examined.

Speaker 2:

You can see what the weights are, you can understand how it was trained and what biases exist in it and what data sets it's using, and you can change it, you can tweak it, you can make versions of it. It's adaptable, and no one's controlling that. Once Llama 3.1 was released, it goes in all sorts of different directions, and people change it and improve it, or make it worse, in their own variations and colors, and that is great decentralization. So that's sort of the most important thing, and it was when I started seeing that the open-source models were getting better and better that the concept of building Venice really became obvious. Two years ago the open-source models were not nearly as good as ChatGPT. You could play around with them and they were interesting, but they were obviously not competitive. And now they're very much in the same ballpark, and some of the open-source models are sometimes winning, in certain ways, even more than the closed source.
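
To make the "you can examine and tweak the weights" point concrete, here is a minimal sketch of loading an open-weight model with the Hugging Face transformers library and enumerating its parameters. The specific model ID, and the assumption that you have accepted Meta's license and can download the gated weights, are illustrative, not something specified in the episode.

```python
# Minimal sketch: inspecting an open-weight model locally.
# Assumes `transformers` and `torch` are installed and that you have access to
# the Llama 3.1 weights (Meta gates them behind a license acceptance).
from transformers import AutoModelForCausalLM

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # illustrative; any open-weight model works

model = AutoModelForCausalLM.from_pretrained(model_id)

# The weights are just tensors you can enumerate, audit, export, or fine-tune.
for name, param in list(model.named_parameters())[:5]:
    print(name, tuple(param.shape))
```

The point is not this particular library; it is that nothing in this loop requires anyone's permission, which is what distinguishes an open-weight model from one locked behind a closed API.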

Speaker 1:

Yeah, I feel like it's so close now. It's like a horse race: this group just came out with a new one, and that one is slightly ahead. But I've got to be honest, at some point it's like the iPhone. At first the updates to the phones were pretty big and things were getting better and better, and now it's just a new iPhone and the camera is a little bigger. I feel like we're already at a point where there's so much parity that you should be choosing on privacy and on non-censorship, and actually getting interesting, real, truthful information and not just information that they send you. Because it sounds like, from what I've learned, the more a question is asked, especially if it's a touchy subject, you're saying with a lot of these closed-source ones, yes, OpenAI's ChatGPT does its thing, but then you've got the humans in the room typing stuff, and they're massaging, and that's a kind word, the answers to fit human biases.

Speaker 2:

Yeah, it's a big problem. Interestingly, and unfortunately, people don't care about privacy as much as they say they do. You know, if you ask someone on the street if they care about privacy, they'll almost always say yes, but what you really care about is, are they willing to change their behaviors in order to become more private? And empirically, people are really not. Empirically, people go for fast and easy. All else equal, they prefer privacy, but they will sacrifice it immediately if it means something slightly faster or more convenient. So we've recognized that with Venice: the privacy angle is never going to be the selling point that gets someone to change. What does get people to change, though, is that when they type something into ChatGPT and they get back an answer that they can tell is obviously doctored, obviously biased, it's insulting.

Speaker 2:

First of all, they just feel insulted. They're like, who did this? What are they trying to do? Why are they treating me like a child? They might be outraged that the information is wrong, or they might just feel like it's so biased that it's kind of embarrassing, but they feel that, and there's an emotional aversion to it. And when they use Venice and they ask the same question and get an answer that's much more reasonably balanced, that's really what gets people to change their behavior. So that's been an interesting thing.

Speaker 1:

So with Llama 3 and Meta: you know, one of my companies, and we're slowly getting out of this because it's getting so painful, does digital marketing on Google, on Facebook and whatnot, and the amount of rules that get slapped on you and the difficulty they create just to try to advertise on those platforms is crazy. So my point is, Meta and Facebook, with the last election, don't exactly have the best track record, as far as open-sourcing Llama for everybody. I mean, I know it's the best, I heard it's the best. Should we be concerned at all that Facebook/Meta, which has a terrible track record on censorship, is behind this?

Speaker 2:

So I don't think that Meta is releasing this frontier model out of some ideological commitment to openness.

Speaker 2:

Maybe, maybe, but I'm not assuming that. I think it's much more pragmatic, from a game theory perspective. Meta just has an incredibly huge amount of servers to train models on just sitting around that they can use, first of all. And two, their big competitors are getting into the AI space, and Meta sees this as a way of undercutting their competitors at a relatively low marginal cost to themselves. So from a business strategy perspective, it totally makes sense what they're doing. But no one can or should assume that Meta will keep releasing these large open-source frontier models like they are. They are expensive to train, and they will become increasingly legally contentious.

Speaker 2:

The moment that it becomes too legally contentious to release such a model, Meta will not do it anymore. So hopefully that falls far enough in the future that people can use alternatives, which will require decentralized training. If you're in the open-source, decentralized AI world, the real key that needs to get unlocked is decentralized, permissionless training. If that can be done with reasonable efficiency, then we can all feel pretty good about the future. But we're not quite there yet, so we need to keep hoping that Zuckerberg continues his work.

Speaker 1:

If they get pressure to pivot, do we at least get to use whatever model we have now as the base, or would we literally have to start over with a new decentralized system?

Speaker 2:

Great question, so I'll answer it in two ways. One is the what-can-we-do question, and the other is what are we legally supposed to do according to the terms of the agreements.

Speaker 1:

Can we, and may we? Two different questions.

Speaker 2:

So, what can we do: when the model is released, it is out there forever, right? You can't put the genie back in the bottle. There's nothing Meta could do to stop people from using this new Llama 3.1 and any derivation of it. So by releasing it, it has essentially set a new foundation for society: open-source AI for the world will never be worse than this level. They might not release another one, and it may not get better than that level, but at least that level is always going to be available and open to everyone, so that's great. Now, in terms of the other question, what are we supposed to legally do if Meta said you can't use our model anymore? I don't know, but it would be a useless gesture, because people could use it without them knowing, and because these are open source, you can permute them into other things. So I'm not too worried about that.

Speaker 1:

Cool, so we're off to a good start, is what you're saying. Disco, I know I've hogged the mic, and you've had like nine questions on your lips.

Speaker 3:

No, I'm good. I just think it's fascinating, you know, Erik, when you point out that the censorship or the bias is kind of the motivator right now. But I really think that people aren't understanding the intimacy they're going to develop with what they're sharing with the AIs as these keep coming. So I do think that privacy will at some point come to the forefront of thought on that, because it's really this kind of push and pull on that side of it. That fascinates me.

Speaker 3:

I guess I'd want to ask: we're trying to be advocates for decentralization overall, and we're trying to figure out what people who really believe in this and kind of see it coming can be doing, and what we should be looking to move towards in terms of spreading awareness of what's to come. Because people think, oh, you're so early or whatever, but not really; it's happening and it's coming soon. So are there any thoughts on what we can do to help galvanize folks to start banging the drum on this behalf?

Speaker 2:

Well, you could cover the topic on your, you know, live Twitter Spaces and things like that.

Speaker 1:

We should do that. What are we doing?

Speaker 2:

Yeah, I think just talking and showing people is really the thing, right. The best way of getting people interested in Bitcoin was never really sitting down for an hour and discussing the ideological merits of it; it was actually just to send someone like ten dollars of Bitcoin, and if it went to eleven dollars later, they would be hooked. Sadly, that's always been the best way of spreading it, just through that.

Speaker 2:

So I think it's just showing people. And then when someone's like, oh wow, I just made a dollar, for some reason their brain chemistry changes and they want to learn about the thing that just made them money, and then they will be much more eager to hear all these cool ideological reasons behind what they have. It's the same with showing people the alternatives. You know, ChatGPT is still very new. There's a ton of people in the world that still don't use it for much. It's still a novel item. And certainly the alternatives to it are even more new and less common.

Speaker 1:

So just showing people how these open-source tools work, and that kind of thing, I think, is all you've got to do, really. Yeah, I think you nailed it. As far as, with Bitcoin, just like with privacy, you know, as we onboard all the normies and we cross that chasm to mainstream America and the mainstream world.

Speaker 1:

The hardest thing is, how do you pass along the ethos that you and, like I said, Andreas and Trace and all the early folks, and Satoshi and Hal and all these guys, taught us? You want everyone to know and care, but you're right about privacy: if privacy really mattered, then we'd all be sitting here fighting for Monero or Zcash, right, where you have ring signatures and stuff, super, super secure. So with AI, without number-go-up technology, it sounds like you're saying it's just finding that use case and your first aha moment, like the crazy guy with hair and an ice cream cone when I wanted a picture of Trump eating an ice cream cone, you know, so I could etch it in my bench in my backyard.

Speaker 1:

But the proof is in the pudding: getting really good information, whether it's historical facts and data or current stuff, instead of being censored at every turn. Like I said, what amazes me most is how quickly it happened this year, and it was right around the time you launched Venice. Just like that, on a dime, you could not talk about or ask certain things, and it just puked. I was going to pull up the screen and show examples, but, just fortuitous, ChatGPT right now, at least for me, and maybe they know I'm doing a podcast with our huge audience, didn't work. It's down. Venice.ai, everybody, is not down. Venice.ai is working just fine.

Speaker 2:

Well, that's good. We had a crazy traffic flood the last few days, so I'm glad to hear that.

Speaker 1:

Yeah, I saw your thing there. So we just have a couple minutes left with you, Erik. Any advice in terms of Venice.ai? Like, I know there's God mode and there's setting up prompts. What advice would you give people to get the most out of Venice, to use it best with either of those or something else? Or do you want me to ask it a question just to show off some of its fortitude?

Speaker 2:

Yeah, so from the highest level, it's used essentially for three things: doing text chat with AI, doing image generation with AI, and doing code development with AI. Each of those categories has different models that you can use, and the first thing to try is just, you know, ask the same question with more than one model and see the differences between them. Right now, the two main ones for text are Nous Theta, which is web-enabled, and then the Llama 3.1 405B. Nous Theta, being web-enabled, is very fast and connected to the web, which means that if you ask a question that requires current information, like, oh, I heard there was an assassination attempt on Trump, that's the model that will be able to scan the internet and give you relevant answers back. The Llama 405B model doesn't have web access, so you would not want to ask it about something recent, but it's trained on essentially the whole corpus of human knowledge, and it is a more intelligent model. So if you're trying to get higher-quality writing or a more interesting conversation on a fascinating topic, the 405B model will be slower, but it's generally better quality. So switching between those two is important, depending on what you want to use them for.

Speaker 2:

In terms of image generation, each model is quite different. Flux is the new one that we just added for image generation, and it is really the best ever. It is cutting edge. It's as good as Midjourney, which has always been better than any of the open-source stuff. And finally now, with this Flux release, open-source image generation is truly competitive.

Speaker 2:

You can play around with different styles and really just experiment with it. This is a new tool, so play around with things, understand how it works. And then what I've found myself doing is often just using it as my search engine. You know, if I have a question, like, I'm traveling right now, so if I have a question about a place I'm in, I'm going to open Venice and just ask it, because I don't get bombarded with a bunch of ads and the response is faster. So, purely regardless of privacy, regardless of censorship, just using it as a search engine is really cool and convenient. So that's the quick overview.

Speaker 1:

There you go. DALL-E just showing me a guy with crazy hair, which is partially accurate, but maybe only one sixty-seventh accurate, eating an ice cream cone. And I put in Venice in a boat, and I actually do have a prompt; I picked one of these cool, more black-and-white sketch styles. And, Erik, I don't know if you know this, but since Disco and I, basically since I got Venice Pro, we've been using your images for our topics of conversation at the top, and we've absolutely just loved it. But I didn't even know about Flux, I hadn't seen this one.

Speaker 2:

Yeah, Flux went up a week ago. Flux is absolutely amazing, especially for photorealism. So if you go to the styles and you click None for the style, to just get back to normal, and you try to do something photorealistic, it's really amazing what it can do. In the future we will add an API so that people can access this stuff algorithmically, and that'll be our next big thing. And then, you know, always trying to stay at the cutting edge of the models that are available, and just scaling this up in such a way that people can be private and uncensored and interact with intelligence from anywhere in the world.
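
As a rough illustration of what "accessing this stuff algorithmically" tends to look like, here is a hypothetical sketch of a chat-completion request in the common OpenAI-style shape. The endpoint URL, environment variable, and model name below are placeholders invented for the example, not Venice's actual API, which had not shipped at the time of this episode.

```python
# Hypothetical sketch of programmatic access to a hosted open-source model.
# Endpoint, key name, and model identifier are placeholders, not a real API.
import os
import requests

API_URL = "https://example-inference-provider.test/v1/chat/completions"  # placeholder URL
API_KEY = os.environ.get("EXAMPLE_API_KEY", "")                          # placeholder key

payload = {
    "model": "llama-3.1-405b",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize the case for open-source AI models."}
    ],
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The design point is simply that once a provider exposes an HTTP endpoint, anything you can do in the chat window can be scripted, which is what makes integrations into podcast workflows, bots, or research pipelines possible.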

Speaker 1:

Yeah, this is amazing. Disco, do you have anything else for Erik?

Speaker 3:

No, I'm good. I appreciate the time, but I really just appreciate you building this and, you know, alerting people to it. Because, obviously, light bulb: as you say, once people kind of see it and experience it. I know my son was all over ChatGPT, and once I told him, I was like, dude, do you want them having that stuff? He got so scared, he's like, I deleted it right away, I got Venice.

Speaker 3:

So I really do think that it's just about awareness, and you know, once they start riding that bike, they're like, dude, this is so awesome, and it comes that way. So I just really want to thank you for your efforts on this. It really lit a fuse within my own psyche about being tuned into this and making sure that we can do whatever we can, from our little small part of the globe, to have an impact. So thank you for what you're doing.

Speaker 2:

Thank you, it's been a good discussion. I appreciate you guys having me on.

Speaker 1:

Yeah, absolutely. Erik, it has been a treat. So everybody, definitely check out Venice.ai. If you get the Pro version, it's, what is it, $49 a year? It's literally nothing.

Speaker 1:

You get the Pro version and that will give you better search. Now that they're getting more congestion, you kind of get first dibs on your searches, and the pictures and images don't have the Venice watermark on them, although the watermark actually does look pretty cool. So if you use it in your podcast, have the Venice logo on there so they know where you got it. But yeah, it's a no-brainer, and you can pay with crypto. I think I paid with ETH, but you can pay with ETH or Bitcoin, both of those. Erik, can you pay with any other cryptocurrencies, or dirty fiat?

Speaker 2:

Yeah, ETH, Bitcoin, Lightning, and obviously fiat through a Stripe credit card. Yeah, all the main ones.

Speaker 1:

Awesome. Well, Erik, appreciate your time. Everybody, I don't know if we gave any financial advice, but we have to do a disclaimer: there was no financial advice in here. But with that said, Erik, appreciate you, and take care, everyone. Go check out Venice.ai: privacy, security, and actually getting questions answered that are relevant, totally true, and not censored. So take care, everybody, and thanks, Erik. Thanks.
