In a dystopian future, the truth becomes an increasingly elusive concept in the Philippines. As "patient zero" of the global infodemic, the country becomes a hotbed for the proliferation of disinformation and misinformation, fueled by the advanced capabilities of AI and the unchecked power of social media platforms.
AI-generated misinformation, indistinguishable from genuine content, floods online spaces; meanwhile, social media algorithms amplify its spread, creating echo chambers of deceit. Reliable sources are drowned out by the cacophony of unreliable voices, leaving the populace vulnerable to manipulation.
These technologies influence various sectors, from healthcare to politics. Health misinformation runs rampant thanks to wannabe wellness gurus and quack doctors, and many Filipinos end up misdiagnosing themselves and succumbing to unnecessary anxiety. Political polarization deepens, democratic discourse suffers, and critical thinking dwindles as the lines between truth and falsehood blur beyond recognition. Post-truth politics reign supreme, with malicious actors manipulating public opinion for their own nefarious agendas.
As the fabric of truth unravels, the Philippines finds itself ensnared in a web of deceit, grappling with the consequences of a digital dystopia where reality is but a shadow of the truth.
Nowadays, influencers shamelessly promote fake supplements on social media, exploiting trust for profit. With deceptive tactics and false promises, they prey on the vulnerable, blurring the line between authenticity and deceit. In this landscape, truth is obscured by sponsored posts, leaving the masses vulnerable to exploitation.
Deepfake videos of politicians proliferate on social media, spreading like wildfire. With alarming realism, these fabricated clips manipulate public perception, blurring the boundaries between truth and fiction. As a result, trust in media and governance erodes, leaving the Filipino populace vulnerable to manipulation and misinformation.
One major concern is the potential misuse of technology for misinformation, particularly with the rise of deepfakes and AI-generated content. This could further exacerbate existing issues with misinformation and historical revisionism, particularly regarding topics like Martial Law and the current presidency. The Philippines is already so...susceptible to misinformation, bribery, and vote-buying because of the structures that have been left there by colonialism and past political failures.
I guess in that [pessimistic] sense...in the form of AI, there will be much more AI stuff that will be used to deceive people...So [that's all]...[a] pessimistic [future of socialization] is just [that]...there'll be an explosion in AI that could possibly be used for scamming.
...[W]e're already seeing, for example, voice generative models. So you can take 20...minutes of someone's voice and then generate arbitrary sentences, using it like an arbitrary script. Imagine if you can just create an entire interview of some politician; it's all fake, but it looks so real, at least real enough to be real to more than half of our voting population. I'm not even talking about something that is not yet here...it is already a thing. If anyone has paid attention in the past couple of months, we are already living in that reality. Right now, companies are using it for games and whatnot, but it only takes a bunch of malicious actors to adapt that system to political purposes. So yeah, I have no idea how to even begin mitigating that or solving that. But that's why we also need to invest a lot of resources in figuring out AI governance, and just preventing all these things from getting out of hand.
For example...[when it comes to] politics. I found out my boyfriend's family are [Bongbong Marcos (BBM)] die-hards. Then they also found out that I shared a Leni post [on] Facebook, and my boyfriend told me that their family was talking about it. They were kind of laughing at what I posted, and then they were saying that I don't know anything about what I shared...how did I get to that level? Because of technology, there is a much wider political divide, and it's harder to make statements online. In entertainment too, as I mentioned [in my] optimistic scenario, I only want to see what I want to see. So that could also apply to his parents; maybe they are only seeing BBM support posts, whatever Duterte posts or...what is that called again? Whatever Thinking Pinoy posts, that's all they would see. Because of course, it's addicting and it self-validates us; we want to feel good, you know, we don't want to see something that would trigger us. So I think that would be the downside of my optimistic scenario. Yeah, right now I'm also feeling it...people are cutting ties because of what the algorithms are feeding or recommending to them. So we have to be careful with the way we train or the data we feed these models.
...[N]ow, we have the machine use of social media, [which] can influence the people at such a fast scale...and automate these things greatly...I think the core thing that computers could redevelop in probably the next 20 years is the scale at which information disseminates. Many years ago, if you wanted to talk to someone in the US, or even just to your friend...you had to meet them physically [or] call them. Now, they're literally just a Facebook chat or Viber chat away. So that accelerates the processing of information, but also makes it easier for bad actors to disseminate wrong information. That's really how the misinformation [and] disinformation machine [has been growing in] the past 20 years...And so in the next 20 years, we will probably see an acceleration in how much easier it is to influence...other people's beliefs, politics, and society through tech. Of course, we can literally shape realities; AI will probably only further blur the line between what's real and what's not.
In the next 20 years, AI will probably further blur the lines, but [also] streamline [the production of] what's not [real]. Already now, I think we see edits of things that can potentially destroy democracy for [the benefit of] groups of powerful people. That will probably [reveal itself in politics]. For [the] US election...you will see how AI will increase disinformation [and] misinformation. And so I think we are divided by the landscape, but only...those with massive [resources] can make use of the technology that they can afford, in order to influence millions of Filipinos into believing things that aren't true. That even goes [for], for example, our habits of content consumption. In the last 10 years, we've gone from consuming things primarily via TV and radio, to consuming things via the Internet. There's no new way for fake things to get vetted, [unlike] how in TV and radio, there were large institutions to do that vetting. Now, in the age of the TikTok influencer, we actually just have small-scale vloggers who are aligned with massive corporate/political beneficiaries and who could easily disseminate their information...So again...[while] a large part of the change that we have right now...is definitely because of the Internet, and the explosion of information has allowed so many things that were impossible before...there are also the implications of how, because there's a lot more emphasis on user-generated content, fake news, disinformation, and gossip spread more rapidly...
...AI is kind of scary now, because it could make decisions based on just seeing something...like you don't need a human to pilot it anymore. So they have no sense of guilt or remorse. So at least, you know, for the Vietnam War...which is kind of a good example because it was the first mass televised war ever...people felt really guilty for doing that. But then, if you have AI to do it, you won't feel the guilt because you weren't doing it yourself...it's like: "Oh, it's not me, it's the AI." I mean...it's not the AI, [it's] the people who made the AI. But then what if the AI writes itself at that point? Because it's kind of scary how they can just take [appearances]...you know, with Meta, like the celebrity AIs, like with the Kendall Jenner and Snoop Dogg AIs...they're already taking people's physical appearances. So that AI acts with the skin of Kendall Jenner, and it's making decisions already, but it just looks like Kendall Jenner...it's scary.
[In the] worst-case scenario in 2040, we will have to teach children even more strongly to never trust just one source. Their skills to question everything they know will have to be sharpened even further, which can be a good thing or a bad thing.
It's so easy to convince people of something that they're not. And this even goes for health and wellness. It's [an already] existing thing if you think about it. If you scroll through TikTok or [Instagram] reels, you [can go] like: "Oh yeah, all these people have the same experience as me, and they're using these things. Therefore...I need it...maybe I do have that sickness..." [Social media as a health information source] is [both] a pro and a con. It's a pro, because sometimes that's the way people actually find out about important things when it comes to their health. But it's also a con because...what if you don't have it? And you're basically accidentally gaslighting yourself into having it?...