
From MacPaint to Artificial Intelligence

Like previous articles I have written, this article includes autobiographical details of my life that I feel are informative to understanding the more broadly applicable concepts I am attempting to communicate. What may seem like over-explaining to some readers may be necessary context for others. Having lived my own life and my own experience, and having some reason to believe my life experience is fairly far outside the norm, I consider it necessary to provide these personal details in order to communicate my perspective effectively.


Most of what I did not learn through formal education, I learned by self-study, listening to podcasts, reading books and articles, and through conversations with peers and mentors. While much of my research has involved studying fringe sources, I have always relied on traditionally trusted sources such as industry-specific experts, government agencies, reputable media outlets, and think tanks to provide the basis of my understanding of reality. I'm willing to entertain unusual ideas and to suggest areas that may be worthy of more intensive formal scrutiny, but I try to avoid favoring a hypothesis unless I feel there is a significant body of legitimate sources to justify supporting it. It is my intent to return to this and my previous articles at a later date to insert reference links, both as citations for some of my more outlandish-sounding statements and as good starting points for anyone interested in learning more about those subjects. The most salient single resource I can refer readers to at this time is the Gigaom Voices in AI podcast. The host, Byron Reese, has interviewed 85 experts in the field of AI research as of this date. Although it is a significant investment of time, I highly recommend that anyone wishing to brush up on the state of the field listen to the entire series. AI research is not strictly limited to computer science; the field is highly interdisciplinary. I would call readers' attention especially to the cognitive neuroscientists and philosophers Mr. Reese has interviewed, as they frequently discuss some of the more esoteric-sounding concepts I may mention in this article.


Despite the fact that most of my education, work experience, and self-study has been directly or tangentially related to computers, programming has never been something that particularly interested me. I was raised by academics who in turn worked in fields directly or tangentially related to computer science. It is perhaps an experience best understood only by other children of academics just how much endless lecturing one is subjected to as the child of a college professor. Suffice it to say, I was fully versed in the fundamentals of computer science and was experimenting with writing actual code by the time I was in 4th or 5th grade.


MacPaint came bundled with the Macintosh 128K in 1984

I began learning graphics software with MacPaint on our family's Macintosh 128K. My first attempts at programming were with the HyperCard and SuperCard development environments. At eight or nine years old, I found programming boring and tedious, and found that most of the things I could think of to write programs for already existed as commercial products. Instead, I focused my attention on learning the software itself: exploring, trying different features, doodling, drawing, making character sheets for role-playing games. Over the next few years, and the next few computers, I taught myself PixelPaint, Aldus FreeHand, and, from its earliest edition, Adobe Photoshop. I was forever annoying my parents by refusing to read manuals, preferring instead to learn through trial and error, and by using hardware and software in creative ways its creators never intended.



ReBirth, released in 1998, was Propellerhead's predecessor to the popular Reason DAW

In the late 90s I discovered synthesizers, audio recording software, and some of the earliest DAWs. I made music for more than ten years, never seriously considering it a career; I always regarded graphics work as my primary interest and career path, and music was just a hobby. But I found that most of the skills acquired from using graphics software transferred easily to audio software. Once you know how to teach yourself a piece of software, learning new applications is fairly simple even if they work in a different medium. I dabbled with video editing, 3D rendering, and animation, and found them simple enough to learn, but the long render times annoyed me compared to working with 2D graphics and audio. That much seems not to have changed; I still periodically dabble with video and find it obnoxiously cumbersome.


I've been consistently approaching computers with the same philosophy since elementary school: they are a tool intended to be used to perform tasks. We can't all be programmers, and we can't all be computer illiterate. The intent when computers began to enter our homes and workplaces was that most of us would use software, not make it. I made a conscious choice that writing software by hand was not a priority for me, and that I wanted to be competent at learning and adapting to new software in order to perform whatever tasks were necessary in my future career.


It's difficult for me to explain verbally, but I have an intuitive sense for what a piece of software is and isn't allowing me to do. I sometimes explain that you could remove all the labels on sliders, controls, and settings, and it would still be just as easy for me to map out what everything does, the limits placed on certain inputs, the range in which allowed inputs are useful, and so on. I suppose I can construct a simile by way of explanation. As a user, without hacking the software, without access to the source code, you're like a man in a dark room full of obstacles. The developer expects you to behave in a certain way, has put those obstacles in your path to prevent you from wandering off in the wrong direction, and has given you a cane with which to navigate the room to the door on the opposite side. Instead of simply trying to walk across the room, as the developer intended me to do, I swing the cane around and very methodically construct a map of the room. I remember where all the obstacles are, and instead of just walking across the room to the door, I figure out where I have room to play in the open spaces I've located.


It is my opinion that the hard divide between "coders" and ordinary workers is an artifact of the era when computers were rare, expensive, and processing time was at a premium. The older generation of workers, now retiring or near retirement, entered the workforce at a time when, if they were not programmers themselves, they were not allowed to touch the computers, and that nearly phobic fear of breaking something remains in many of them to this day. Industrial designers chose the bland beige of early desktop PCs deliberately, to make the machines less threatening to workers who were increasingly expected to use them directly on a daily basis. Since my peers and I were in grade school, the trope of asking adults if they have tried "turning it off and back on again" to solve most minor IT problems has been so commonplace it became the staple recurring joke of the British television series The IT Crowd. Most younger, tech-literate workers are quite familiar with being interrupted several times a day to help an older worker reboot their PC or force quit a program. As someone whose primary interest was in 2D graphics work, with a hobby-level interest in audio software, it has been an endlessly frustrating experience that many older workers, including the gatekeepers to my career advancement, presume that programming knowledge is necessary for, and synonymous with, basic computer literacy and the ability to use professional software for work-related tasks.


As the world of computing shifted to web-based applications, social networks, and content aggregators, I approached it in much the same way. I have very little interest in creating my own version of any of these products; I just want to know how to use them, fiddle with settings, bang things off their limits, and think of creative new ways to use them that they weren't necessarily intended for but that are useful.


Most casual users of social networks adapt their behavior to cultural norms imposed by the site's creators through branding, marketing, and the culture that has organically arisen among the users. There's no actual rule or Terms of Service (ToS) violation on Facebook saying you can't post 75 times a day. People may complain, and there are mountains of blog articles about how the best practice is so-many posts per day, but there's no actual rule. Facebook's news feed was originally chronological, so if you posted frequently your content would fill most of your friends' feeds. When they adopted an algorithmic news feed, I felt out the limits of that algorithm and adjusted my behavior to maximize how often my content would appear in the news feed. My own personal account was just a sandbox to experiment and find limits, which gave me insight that I could then apply elsewhere in a much more controlled and measured way. During this process I was fully aware it would annoy my "friends." When Facebook changed the algorithm, it would take me a little while to bang around and figure out where everything was. Again, I never strayed outside the bounds of the ToS, broke rules, hacked, or used scripts; I just drove the software as hard as I could to find the limits and boundaries.


My conclusion was that no matter how locked down a social network or content aggregator is, and those terms are now more or less interchangeable, you can ultimately drive it to do what you want given enough time and patience. Only strictly curated websites with no user input are immune to this type of experimentation. I have noted a common sentiment in the traditional field of public relations that curated content is preferable, probably because the field depends on being able to influence a narrow band of media outlets. These professionals and the organizations they represent seem to have a strong preference for constraining the Internet to strictly controlled, curated content.


The field of public relations came of age in the era of newspapers, radio, and, most importantly, television. These were top-down broadcast mediums. The staff of the publication or station would select what stories to cover and how to cover them, and the message would travel one way, from them to a passive audience. The primary profit motive of traditional media outlets is to attract and retain sponsors who fund the organization through advertising. Public, private, and not-for-profit organizations, through the use of public relations techniques (press releases, press conferences, interviews, etc.), would attempt to influence media coverage: to gain as much positive exposure as possible for their organizations, to promote coverage of stories favorable to their missions, to mitigate damage from negative press, and to mitigate damage from coverage that ran contrary to their missions. A not-for-profit food bank might issue a press release to a local newspaper announcing a new initiative to attract donors and volunteers; a banking executive might ask to be interviewed by a local radio station about a proposed piece of legislation that would benefit the banking industry, and by extension their bank; a police department might hold a press conference to refute accusations of prisoner abuse; or a car dealership might threaten to pull its advertising from a local television station if the station doesn't downplay coverage of a recall scandal that's damaging the brand of one of the makes it sells. Public relations professionals, working as in-house staff or in a consulting role, leverage strategies such as these to ensure positive outcomes for their clients. They know these strategies well, and the strategies have been effective for decades, but they can be ineffective or counterproductive in the new media environment of the Internet.


The Internet was designed from the beginning to be a distributed, redundant communications network, specifically to make it resistant to the widespread destruction of global thermonuclear war. It does not lend itself to a top-down broadcast model. As problematic as it can be for authorities to have a global, distributed, peer-to-peer communications network, it is impossible to clamp down on the distributed nature of the medium. Its very structure precludes this possibility; it is inherently designed to bypass barriers to information transmission. The only way to stop the Internet from being the Internet would be to completely destroy it, which would require extraordinary means the nature of which I will not speculate on. The Internet is not simply used for the activities we as individual members of the public normally see on a day-to-day basis; it also forms the backbone of a huge number of invisible processes that are necessary to our modern industrialized society. Most of the data transmitted through the network, and stored on its servers, is not content humans consume, but machine-to-machine data used to control and manage industrial processes, power grids, logistics systems, and the like. Destroying the Internet would be catastrophic and would probably end civilization as we know it, so it is unlikely to be considered as a real strategy for establishing "control" by any except religious zealots and the terminally stupid.


So, as much as it may frustrate the public relations representatives in nearly every traditional organization in the world that their normal techniques for managing media coverage are now ineffective, and as powerful, influential, and convincing to policymakers and politicians as some of these people can be, we cannot go back to the world they once dominated. It simply isn't an option. We cannot choke down the Internet and control what people say to each other, and we cannot turn off the system. We have to find another way to adapt our organizations so that they can continue to fulfill the roles they play in society.


By now, most readers will be well acquainted with how algorithms have been used by websites like Facebook, Twitter, Reddit, and YouTube to curate news feeds and maximize profitability for themselves. Facebook and YouTube are probably responsible for some of the most catastrophic outcomes, mostly because those platforms are so widely adopted and influential. In the past, many extremely popular websites have made unpopular changes and experienced a diaspora of users to other platforms. In the last 20 years I have seen wave after wave of migrations. "Social networks" like AOL, Yahoo!, Friendster, MySpace, YouTube, Facebook, Instagram, and Snapchat all had their moments of popularity with trend-seeking young people. Content aggregators like Digg, StumbleUpon, and Reddit were once viewed as somewhat distinct from "social networks," and they experienced their own parallel waves of youth popularity.


I would argue that overall there isn't much difference between early bulletin board systems (BBSs), Internet forums, content aggregators, and modern social networks. Most of the differences are in the marketing and branding the platforms used to attract the attention of users. Overall, the migration pattern is fairly obvious. Users are attracted to a new platform in its early years because of the novelty, the catchy branding and interface, and the freedom young websites often offer. As these sites become more and more heavily used, and the pressure to monetize increases, they gradually phase in various attempts to monetize through advertising. With advertising comes sponsor pressure to control content: to limit content that damages the brand directly or that the brand doesn't want to be associated with. When this pressure reaches a certain point and users feel their experience has been compromised (too many ads, too many restrictions on content), they leave the site. When this happens, it often happens rapidly and in huge numbers.


In August of 2010, Digg launched version 4. It included changes to the site that were a direct response to the phenomenon of very active users having a disproportionate impact on the content of the front page. Overnight, the tone of the front page changed; the enthusiastic active users Digg felt were disproportionately influencing content protested, then abandoned the site. What followed is colloquially known on the Internet as the Digg Exodus. Millions of users abandoned the site and flooded over to Reddit, a similar content aggregator that had long gone somewhat ignored because of its utilitarian interface design and insular user culture. Digg was effectively finished as an influential website. Over the next few years, Reddit went from obscure internet backwater to the powerhouse it is today.


Pinterest, ostensibly an online analogue to the scrapbooks young brides once used to plan their weddings, was branded to target young brides planning their weddings. A glance at the front page of Pinterest in 2011 would have presented you with a page primarily composed of floral arrangements, wedding invitations, centerpieces, and gowns. However, there is nothing inherent in Pinterest's technology or ToS limiting its use to young brides planning their weddings. It was implied in the brand and original marketing strategy, nothing more. I began using Pinterest sometime in 2012-2013. I'm a guy, and I wasn't planning a wedding, so I made boards for cars, robots, survival equipment, and other things I was interested in. At that time, I didn't see an awful lot of other users doing the same; my feed was my own posts peppered between floral arrangements and wedding invitations. I thought it was somewhat humorous at the time. Now, in 2019, the Pinterest feed is alive with cars, robots, and survival equipment, in addition to the bedrock of wedding-planning users. Interestingly, those original users, now going on a decade later, have moved on to using the site for parenting tips, nutrition, exercise, gardening, all the things they went on to do in the years since they got married. The point being, nobody at Pinterest in 2011 would have predicted their site would have a thriving community of people collecting concept drawings of futuristic sci-fi characters eight years later.


Anyone with a computer science background can explain why there isn't really much difference between any of these sites. While the technologists understand this, the executives, marketers, and investors of these sites don't understand that they cannot force users to behave in a narrowly constrained way through branding exercises. Actual human beings are messy.


There was an illusion in the era of television that American domestic life was safely constrained and normative, perpetuated by the fact that the media was controlled by a very narrow segment of society, that the messaging was structured very carefully to support those norms, and that it only went one way. Television networks broadcast sitcoms about happy, normal American families, safely insulated from the view of reality on the other side of the screen. The fact is, things were never normal and happy. Our nostalgia for the cheerful and innocent 1950s of I Love Lucy, or the goofy, pot-smoking, slapstick 1970s of That '70s Show, is not based on the reality of the average American experience during the times those shows were set. We did not yet live in a world where, when the media declared how things were, it was immediately confronted with conflicting views of reality. Much of middle America has been gradually crumbling over the last 40 years, but it wasn't until smartphones and social media started documenting the reality of the average American experience that this decay became impossible to ignore. It didn't happen suddenly; we simply became aware of it suddenly. The Internet didn't cause society to move from order to chaos as we left the era of television; it was always in chaos, we just weren't aware that other people were experiencing the same chaos. Television provided us with the illusion that these happy imaginary worlds were the norm, and that we must be abnormal if we weren't happy like them. It conditioned us to hide this abnormality from each other, each of us suffering silently and smiling plastically in public.


It's not realistic to expect people to stop being upset, messy, depressed, anxious, suicidal, or angry. We can't deploy law enforcement to beat people into submission for not conforming to some sitcom ideal of reality, for complaining on social media about their personal problems, or for being angry when something unjust happens to them. Rest assured, there is a certain segment of the population who can and do call law enforcement and report people for simply being emotional, and it can and does have catastrophic effects on people's lives when they do. The generation that ruled during the age of television conceptualizes new media the same way it conceptualized the old. They don't seem to understand that, unlike the golden age of TV, when if something was on TV everyone in the country saw it, not everyone sees everything that happens on the Internet. So if they see something that upsets them, they react as if everyone is seeing it: the trope of "delete this or I'm calling the cops, my grandchildren use the internet." The idea that life was happy and normal isn't a very old one, only as old as mass media. The illusion of control over the chaos of reality was always an illusion. The Internet just provides us with a constant, real-time reminder of just how chaotic and terrifying reality can be. I should point out that it is also punctuated with joy and beauty, and that to control that chaos, to choke it off, would be to choke off the very chaos that gives birth to the most transcendent moments of the human experience.


Choking it off appears to be the instinct of the old guard, but there is no way to do it that doesn't involve firebombing civilization itself. The rest of humanity cannot allow elderly media and PR barons to destroy us because they refuse to pass gracefully into obscurity and admit their age is over. We cannot conduct our lives without using the Internet, or by only presenting some fantasy societal norm in the photos, videos, and written content we generate. Somebody, somewhere, is going to see something about you they do not like. They are not entitled to firebomb your life or livelihood because they don't like your haircut. You are, at any time, perfectly free to look away from someone's online activity if you don't like what they're saying or how they're living their life. Social media has given us a window onto each other's lives, and we often don't approve of what we see. It has given everyone a voice, and we often don't like what other people have to say. While most people are aware of the algorithms platforms like Facebook use to curate feeds, supposedly showing us what we want to see, those algorithms are really designed to maximize the time you spend on the site. Nothing keeps people on a social media site like an argument, so those algorithms are specially tuned to give visibility to controversy and conflict. We often refer to algorithms like this as AI, but there's no HAL 9000-style consciousness from 2001 behind them. Not only is there no "it" making a conscious choice to inflame conflict and increase divisions in society, there is no "them," no conspiracy to do so. It's simply a function of the profit motive of the corporation and its investors, trying to monetize the site through advertising and maximize the time users stay on it so they will see more advertisements. Your interpersonal drama is the soap opera, and their ads are the commercial breaks: the algorithm facilitates and exacerbates your interpersonal drama, you stay on the site longer and view more ads, and the people watching and participating in your drama stay on the site longer and view more ads.
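To make that incentive concrete, here is a minimal illustrative sketch of an engagement-maximizing ranking function. It is not any platform's actual code; the field names, weights, and the choice to weight comment threads most heavily are assumptions made purely for illustration. It simply shows the basic logic: the feed is sorted by predicted attention, not by accuracy or usefulness, and heated back-and-forth is a strong predictor of attention.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int                # back-and-forth replies, a rough proxy for conflict
    predicted_dwell_secs: float  # hypothetical model estimate of time spent viewing

def engagement_score(post: Post) -> float:
    """Hypothetical ranking score: anything that predicts more time on site is
    weighted up. Reply threads (arguments) get the largest weight because they
    keep people coming back to the post. Weights are invented for illustration."""
    return (
        1.0 * post.likes
        + 2.0 * post.shares
        + 5.0 * post.comments
        + 0.5 * post.predicted_dwell_secs
    )

# The feed is ordered by engagement, not by how informative or accurate a post is.
candidates = [
    Post("aunt_carol", likes=40, shares=2, comments=3, predicted_dwell_secs=15),
    Post("local_argument", likes=5, shares=1, comments=60, predicted_dwell_secs=90),
]
feed = sorted(candidates, key=engagement_score, reverse=True)
print([p.author for p in feed])  # the argument outranks the pleasant post
```

Nothing in a function like this "wants" conflict; conflict simply scores well on the only metric being optimized.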


I heard someone say recently that narrow AI like this "sends us hurtling in the direction we were already going." Whatever process we apply these systems to, we accelerate it by applying them. That can be good, like accelerating the speed at which labs can process and detect cancer cells from thousands of patients. Or it can have disastrous consequences, like the genocide in Myanmar that was more or less directly caused by rumors spread on Facebook. These technologies are widely available, are becoming more powerful by the day, and quite literally have the potential to destroy civilization, humanity, or maybe even life on Earth itself. With other destructive technologies, like chemical or nuclear weapons, it is possible to use public policy to restrict them from falling into the wrong hands. It is possible to physically restrict access to the technology, to restrict the availability of precursor components or materials, and to use surveillance and intelligence to monitor for groups or individuals attempting to plan an attack. The public sector, through law enforcement, intelligence, and policymaking, does its best to keep us all safe from the threat of a bad actor with a weapon of mass destruction. It requires a lot of infrastructure and expertise to spool up a nuclear weapons program; it's difficult to do in your garage unnoticed. But anyone with a smartphone or laptop can develop an AI system, for malicious or altruistic purposes, that could produce globally catastrophic outcomes. The leverage that narrow AI systems give even unsophisticated users, on something as apparently mundane as a social network, allows individuals to have profound effects, whether inadvertently, maliciously, or through unintended outcomes. A single phrase can spread virally and create a political movement. A single photograph can start a riot. As I've already explained, it is not possible to shut down the Internet as a response to this threat. We have to find another way.


In the field of artificial intelligence, the HAL 9000 type of intelligence, the AI most people imagine when you say "AI," is known as an Artificial General Intelligence (AGI). As of the writing of this article, no party on Earth has officially announced the development of a true, conscious, generalized artificial intelligence. Estimates for how long it will take to create one range between 5 and 500 years, if it is in fact possible. It is generally agreed by most experts in the fields of computer science, cognitive neuroscience, and philosophy that such an intelligence is possible. It is generally agreed that the intelligence and self-awareness we experience as human beings are substrate independent, that the biological makeup of the human brain is not necessary for such an intelligence. In short, it is not really a question of if an AGI will be created deliberately, or possibly emerge on its own, but when. It is also generally agreed that such an artificial creation would rapidly exceed the capabilities of any human mind. Experts speak often of such a consciousness rapidly expanding from the IQ of a child, to that of an adult, to orders of magnitude more than the combined intelligence of every human who has ever lived.


The intent of this article is to explain as succinctly as possible where I came from and how I ended up dedicating so much of my life's work to unpaid independent research in the field of artificial intelligence. I dropped out of college just short of earning my Master of Public Administration (MPA) in 2010 to take a $14/hour marketing job. I had a baby on the way, and taking a mundane job seemed like the prudent thing to do at the time. My intent had been to write my thesis on the accommodation of veterans suffering from PTSD and TBI in the public sector workforce. I continued independently to research the key elements of my strategy for addressing the problem, much of which branched off into seemingly unrelated and doubtless "crazy"-sounding subjects. When you start dealing with veterans with PTSD, it doesn't take long before you're running into tales of homelessness, drug addiction, violent crime, and recruitment by gangs and extremist groups.


When I realized sometime in 2015 that many of the tangential issues preventing progress with veterans could be addressed by the application of currently existing narrow AI systems, I became a vocal advocate of deploying those systems, specifically to identify, deconstruct, and reduce the capacity of gangs, organized crime, and extremist groups to operate. These systems, applied to specific security concerns like the identification and disruption of human trafficking networks, have produced tremendous results. It is not difficult to imagine new ways the same technology can be rapidly deployed to cut out many of the cancers of our society.


While I initially began studying narrow AI systems as a tool to fight some of the aforementioned battles, I quickly became aware that the same tools were being deployed in a completely haphazard manner across all aspects of society. The broader issue of where and when to deploy narrow AI systems in a measured way to prevent catastrophic outcomes is generally termed "AI alignment." There have been many opinions offered as to how to regulate this technology, or whether it should or even can be regulated. I arrived at the notion that the only way to manage the increasingly chaotic deployment of "dumb" narrow AI systems was the creation of a generalized AI to manage it all. Most of my research since then has focused on conceptualizing how such an entity might be created, or on preparing a response if such an entity were to emerge as an uncontrolled phenomenon. From my background in public policy, it became clear to me that it was necessary to draw the attention of policymakers, scientists, and other experts to the broader policy implications of AI alignment, both in the deployment of current narrow AI systems and in the development of generalized AI. My background research in the subject was not intended to prepare me to develop AI systems directly, but to be able to point policymakers, governmental agencies, and NGOs toward the appropriate narrow subject area experts and to serve in an advisory role in preparing an appropriate policy response to a rapidly proliferating technology. It has been my stated opinion that hierarchical organizations lack the response time necessary to address this issue before it becomes an existential threat.


Yes, I realize that I come off like a ranting lunatic sometimes, but there is every reason to believe time is of the essence. If we as a society can negotiate this transition, I imagine the coming decades will be peaceful and prosperous. I feel a sense of urgency, but also one of optimism.
