Technology + Art

https://computerhistory.org/blog/technology-and-art/ | May 24, 2022

Art and computing grew up together. CHM is showcasing a selection of early computer films made between 1963 and 1972 in its in-house theater and online from June 1 to September 1, 2022.

CHM Showcases Early Computer Films, 1963–1972

From the earliest days of digital electronic computers, people have used them to make art: from pictures to poems, screenplays, and literature, from music to films. From June 8 through September 1, CHM is exhibiting a selection of early computer films made between 1963 and 1972 in its in-house theater, the Screening Room. The exhibit, “Early Computer Films, 1963–1972,” presents twelve films, mostly shown in their entirety, forming an hour-long program that runs continuously during the Museum’s open hours.

“Simulation of a Two-Gyro Gravity-Gradient Attitude Control System,” Edward E. Zajac, 1963. Courtesy AT&T Archives and History Center.

In the decade from the start of 1963 through the end of 1972, engineers, scientists, and artists developed the new technology of computer animation, that is, using computers to make films. Some of the films selected for this exhibit were created by computing professionals to document advances in computer animation itself and to use computer-animated films for technical visualization and communication. Most of the selected films, however, were made by practicing artists, who created computer animations and incorporated them into their filmmaking. For their use of computer animation, these remarkable artists would become recognized as some of the most important early figures in computer art. By 1972, both the art and the technology of computer-animated films had changed dramatically and were poised to exert tremendous influence on the movie industry, the gaming industry, and fine art.

“Halftone Animation,” Ed Catmull and Fred Parke, 1972. Collection of the Computer History Museum. Courtesy Ed Catmull and Fred Parke

Partners in Art

While it was computer researchers who were the very first to develop computer animation, this new medium powerfully blurred the lines between technologist and artist. Many early computer researchers who made computer animation films, like Ken Knowlton and A. Michael Noll, developed into artists in their own right. Practicing artists, like Stan VanDerBeek, Lillian Schwartz, and John Whitney, had profound influences on the development of the technology of computer animation, as their adoption of it led to new features and ideas.

“Pixillation,” Lillian Schwartz, 1970. Moving Image from the Collections of The Henry Ford.

Very few artists had access to computers in the decade from 1963 through 1972. At that time, computers were expensive, large, and rare in comparison with today. The mainframe and minicomputers of this era were almost exclusively owned by organizations—businesses, laboratories, government units, and universities—not by individuals. Nevertheless, artists were able to use computers for their artmaking, including filmmaking with computer animations. They did this through partnerships, forging relationships with computer professionals in government, business, or education. Through these partnerships, artists gained access to computing resources and expertise.

“Matrix III,” John Whitney, 1972. Courtesy of Whitney Editions, LLC.

At the opening of the 1960s, it became increasingly clear to computer professionals that they could develop computing into a medium for much more than powerful calculations. They were developing computers into systems for reading and writing and for producing sounds and images. And software held the promise for doing these creative activities in new ways using algorithms and automated processes. Artists and computer researchers alike were attracted to the exploration of the limits and possibilities of this new creative medium, and its implications.

“Poemfield, No. 2,” Stan VanDerBeek, 1966-1971. Courtesy of the Stan VanDerBeek Archive.

Initially, computer art received a very frosty reception from much of the established art world. Many critics described the works as cold, simple, and lacking expressiveness. For these critics, computer art’s shortcomings only underscored the importance of human creativity and expression. However, other critics, gallerists, and observers were much more receptive to the idea of the computer as a new tool for artists and their creative expression. In the 1960s there were several important gallery and museum exhibitions of computer art in North America and in Europe. Today, these early artists and exhibitions are highly regarded and are seen as milestones in the development of computer art, digital art, and media art.

“Hummingbird,” Charles Csuri, 1967. Courtesy of the ZKM Center for Art and Media and Caroline Csuri.

The Art of Programming

In the case of computer animation, technology offered artists a way to precisely define, and to automatically generate, moving images. Rather than drawing or painting images by hand, artists could use software to precisely define how images were created, sequenced, and changed. They could define rules by which the computer created patterns. These capabilities of precise control and generative rule-making opened a vast space of possibility in computer animation that is still being actively explored today. Important too was the sense of surprise and discovery that these artists experienced in this early period of computer animation. Most often, they could not see the results of their programs in real time, having to wait long periods to see the animations. Decisions and details in their programs frequently led to surprising results, even glitches that proved interesting or beautiful. For many of these artists, these twin aspects of control and surprise led them to desire a much more interactive, real-time ability to create computer graphics and animations.

“Computer-generated Ballet,” A. Michael Noll, 1965. Courtesy of AT&T Archives and History Center.

After 1972, artists and computer professionals developed the computer into a powerful tool for almost every kind of artmaking. In this, it has helped to break down barriers between different forms of art, and encouraged the mixing of different forms as in multimedia art. In computer and digital art, the computer has provided a new medium for artistic expression. In many cases, the computer has made it easier to reproduce and share art. Together, these changes have led to an exciting broadening of artistic possibilities.


Learn more about the exhibit.

View all twelve film clips on a playlist.

Main image caption: “Affinities,” Lillian Schwartz, 1972. From the Collection of the Computer History Museum. Courtesy The Henry Ford

 



A Museum’s Experience With AI

https://computerhistory.org/blog/a-museums-experience-with-ai/ | February 3, 2022

Find out the results! We tested AI machine learning tools on a prototype subset of the CHM collection to see how well they improved access to our oral histories.

Lessons Learned From CHM

For decades, the Computer History Museum (CHM) has worked to collect, document, and interpret the history of artificial intelligence (AI). We have collected objects, archives, and software and produced oral histories. We have convened vital conversations about AI in our public programs, examined the history and present of AI from multiple perspectives in our blog and publications, and interpreted the story of AI and robotics for our visitors onsite and online in our permanent exhibition, Revolution: The First 2,000 Years of Computing.

With the rise of new AI technologies based on neural networks and large data sets, typically called “machine learning” or “deep learning,” and the facility of these technologies in areas such as speech and visual recognition, CHM staff have followed efforts by some of the largest museums internationally to use and explore these new tools. Concurrently, CHM has been sharpening its ideas about, and commitment to, a new goal that we call “OpenCHM.” Its core strategy is to harness new computing technologies to help us make our collections, exhibits, programs, and other offerings more accessible, especially to a remote, global audience.

Our commitment to OpenCHM led us to join the new community of museums and other cultural institutions experimenting with AI, in service to our broader purpose of accessibility. We determined that such an AI experiment could help us learn how we should, or should not, design AI tools into our larger OpenCHM initiative. Further, we believed that given the culture of sharing and collaboration in the museum, libraries, and archives sector, an AI experiment by CHM could benefit the community at large. It is in that spirit that we here offer our experiences and lessons learned.

Putting the Grant to Work

For our AI experiment, CHM was gratified to receive a grant (Grant MC-245791-OMS-20) from the Institute of Museum and Library Services (IMLS; www.imls.gov), an agency of the US federal government created in 1996. In recognition that museums and libraries are crucial national assets, the IMLS gives grants, conducts research, and develops public policy. Of course, the views, findings, and conclusions or recommendations expressed in this essay do not necessarily reflect those of the IMLS.

CHM’s grant was part of the IMLS’s National Leadership Grants for Museums and supported a one-year, rapid-prototype project to create a simple search portal for a selection of digitized materials from CHM’s collections—especially video-recorded oral histories—and the metadata about them generated by commercial machine learning services. Our thought was that by having internal staff and external user evaluations of this portal, we could assess the present utility and near-future potential of the various machine learning tools for ourselves and the community as well as gain first-hand experience with the actual use of these tools in practice.

This project was also supported in part by the Gordon and Betty Moore Foundation (moore.org). The Gordon and Betty Moore Foundation fosters path-breaking scientific discovery, environmental conservation, patient care improvements and preservation of the special character of the Bay Area.

Leveraging Microsoft Tools

For our experiment, we chose to use Microsoft’s commercial machine learning services, currently marketed as “Cognitive Services.” Our reasons were several. Microsoft is supporting CHM with technology, including generous use of its cloud computing services (Azure) and productivity software as well as donations of hardware. Microsoft’s market-competitive Cognitive Services are part of its Azure cloud computing offerings. Given our existing use of Azure, evaluating Microsoft’s Cognitive Services made particular sense for us, and Microsoft’s overall standing in the machine learning services market made our evaluation relevant to the broader museum community.

The project lead for CHM’s AI experiment was our Chief Technology Officer, and our IMLS grant was for $50,000, with CHM contributing an additional $15,000 of staff time. Of this, $30,000 was devoted to an external technology development firm and other technology costs. CHM applied an additional $20,000 from the Gordon and Betty Moore Foundation to development costs. Additionally, CHM used some of its donated allocation of Azure services to cover the cost of some of the cloud computing services (storage, database, indexing, web applications) and of using the Cognitive Services tools. Given the difficulties of the pandemic and the scope of the work itself (including evaluation and communication), we requested and received a six-month extension to our original one-year workplan.

Selecting the Sample Collection

One of our first tasks for the project was to select a corpus of materials from our collections to use for the experiment. We devoted considerable effort to this selection, choosing oral histories for which we had both video recordings and edited transcripts. Further, we chose oral histories that grouped into two topic areas—the history of Xerox PARC and the history of artificial intelligence—as well as a selection chosen for a diversity of gender, race, sexual orientation, and language accent. Additional audio recordings, scanned documents, and still images connected to these topic areas were chosen to round out the corpus.

In the end, our selection process turned out to have been more subtle and nuanced than necessary. In this rapid prototype project, we were not able to take full advantage of our selections for topics or for diversity because the prototype did not offer that level of sophisticated use. We would have been better served by selecting a corpus more quickly, based primarily on representing the kinds of digital file types, sizes, and qualities held in our collections: a more technical than intellectual selection.

Designing a Portal

Our other initial task was to design our prototype portal, describing what we wanted to achieve so that it could be translated into a set of requirements for our external developer. The vision for the portal that we designed focused on:

  • How the digital assets would be presented along with the metadata about them produced by Cognitive Services;
  • How faceted search could be conducted across the corpus and within the presentation of individual items;
  • A networked graph display as a discovery tool, presenting metadata connections between a set of items returned by a search query.

While this vision was translated into a set of requirements that our development contractor pursued, we later found that our process had left important elements unacknowledged. For example, team members assumed that the existing metadata about the corpus items in our collections database would be attached to the items in the rapid prototype, along with the metadata generated by Cognitive Services. By the time we realized that this data had not been imported into the prototype, doing so would have required a cost-prohibitive reworking of the prototype’s database. In another instance, team members assumed that the prototype portal would have search features common to many commercial websites, such as YouTube, or to Video Indexer, a Microsoft service built on Cognitive Services. It did not, because those features had not been explicitly articulated when the requirements were set.

Tackling Communication Challenges

Across the development of the rapid prototype portal and the migration of our corpus into its data store, most of the effort went into communication. Despite significant efforts and strong intentions, our team encountered difficulties in communicating across our own collections and IT groups, with each struggling to translate its goals and concerns into the idioms and workflows of the other. This is a well-known issue in the world of libraries and museums, perhaps one for which teams should explicitly plan, building in touchpoints, correction strategies, and, most of all, adequate time for meeting, discussion, and resolution. These issues were only compounded by the need to also work with our external developer firm. Here again were the same issues of translating across idioms, workflows, and organizational structures.

One benefit of these difficulties was that we needed to develop an evaluation approach and workflow early in the process to gauge the progress of the prototype development. This evaluation work evolved into our more formal efforts for internal and external evaluation once the prototype was deemed ready. Having a data scientist on our staff involved in the project from an early stage proved to be an important asset. Our experience supports the general best practice of involving evaluators in a project from the very beginning.

Learning As We Go

Our final phase of development of the rapid prototype was actually one of simplification. We chose to strip out features that were not yet adequately developed so that they would not become distractions in our evaluation work. In this, we focused on the primary goal of the project and the evaluation: To judge the current utility and near-term promise of using commercial machine learning tools for expanding access to our collection. Features that were not critical to assessing how machine generated metadata might serve this goal were stripped out.

One feature that could have been significant to this core assessment goal was removed: the networked graph view presenting metadata relationships between items returned by a search query. Despite some significant development of this feature, our budget of time and expense was inadequate to develop this promising avenue successfully. We remain convinced that such a view of search results and their interconnections could be a vital discovery and access tool, but it proved beyond the reach of our experiment.

An outline of the technical architecture of the prototype, from a design document.

The prototype portal gives users an interface for using Azure Cognitive Search to query an Azure SQL database. The database contains metadata about each of the collection items in the corpus, which are themselves held in an Azure storage container. A search query returns result items grouped by type (video, document, audio, or image). When an individual item is selected for viewing, the machine-created metadata for that item is available to see and, in some cases, allows for navigating to the relevant position in the item.
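To make this architecture concrete, here is a minimal sketch of the kind of faceted query such a portal might issue, written with the azure-search-documents Python SDK. The index name, field names, endpoint, and key are illustrative assumptions, not the actual schema or code of CHM’s prototype.

```python
# A hypothetical faceted search against an Azure Cognitive Search index.
# Index and field names ("chm-prototype-index", "itemType", "title") are
# invented for illustration; substitute your own service details.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="chm-prototype-index",
    credential=AzureKeyCredential("<query-key>"),
)

# Full-text search across the corpus, faceted by item type so the results
# can be grouped into video, document, audio, and image buckets.
results = client.search(
    search_text="Ethernet",
    facets=["itemType"],
    select=["id", "title", "itemType"],
    top=20,
)

for facet in results.get_facets()["itemType"]:
    print(facet["value"], facet["count"])   # e.g. "video", 7
for doc in results:
    print(doc["title"], "-", doc["itemType"])
```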

For audio and video files, Video Indexer from Microsoft’s Cognitive Services was applied. The tool creates an automatic transcript of the recording, which our system treats as a metadata field. Video Indexer further extracts metadata from the transcript: keywords, topics, and people. Our system treats each of these as additional metadata fields in the database. Lastly, Video Indexer analyzes the audio to produce sentiment data, and the video to generate faces data. Our system treats each of these as metadata fields. For document files, in our case PDFs, we applied Text Analytics from Cognitive Services. From the text, the following metadata was extracted: key phrases, organizations, people, and locations. Occasionally, document files were mistakenly processed as static images, producing the metadata: image text, image tags, and image captions. These same metadata were produced for our properly processed static images.
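As a rough illustration of the Text Analytics step described above, the sketch below extracts key phrases and named entities (people, organizations, locations) from a short piece of text using the azure-ai-textanalytics Python SDK. The endpoint, key, and sample sentence are placeholders, and the real pipeline ran over full transcripts and PDFs rather than a single sentence.

```python
# A hypothetical Text Analytics call: extract key phrases and entities that
# would then be stored as metadata fields alongside the collection item.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

documents = ["Bob Metcalfe co-invented Ethernet at Xerox PARC in Palo Alto."]

# Key phrases become one metadata field ...
for result in client.extract_key_phrases(documents):
    if not result.is_error:
        print("Key phrases:", result.key_phrases)

# ... and recognized entities (Person, Organization, Location) become others.
for result in client.recognize_entities(documents):
    if not result.is_error:
        for entity in result.entities:
            print(entity.category, "->", entity.text)
```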

A screenshot of the prototype, displaying the oral history recording of networking pioneer Bob Metcalfe. The automatic transcription, matched and timecoded to the video playback, appears at the right. The topic metadata “Ethernet” has been selected, resulting in the highlighting of the term in the transcription.

Here, the transcription has been automatically translated into Chinese.

Evaluating the Prototype

For our evaluation, we created surveys for both internal and external users and shared web access to the prototype portal. Because our focus was on our oral history collection, the prototype was geared toward adult professional users and academic researchers, as was our evaluation. Our internal users were staff from a variety of groups: collections and exhibitions; marketing and communications; education; and information technology. Our external users were historians as well as a variety of library, museum, and archive professionals.

In their evaluation of the current utility of the machine-generated metadata about the collection items, our external evaluators were notably more positive than our internal evaluators. For the video items, almost all of the external users found the keyword and topic metadata extracted from the automatic transcriptions to be useful, while far fewer than half of the internal users agreed. This same divergence held for document items, where every external user found the extracted person metadata to be useful, while only about half of internal users did.

Beyond person metadata, about half of external and internal users found the key phrases, organizations, and locations metadata useful, although internal support was consistently lower. For audio items, this same pattern held: roughly half of internal and external users found the keywords, topics, and people metadata useful, with internal users slightly but consistently lower.

Interpreting Results

Our working explanation for the lukewarm assessment of utility across the board, and for the higher external judgment of utility, is that users believe the machine-generated metadata—the automatic transcripts and the summary data extracted from them—are better than no metadata, and that external users feel more strongly that anything expanding access to the collection has value.

Internal users, many of whom are closer to the human curation and production of metadata about the collection, are clearly more critical of the machine results. However, there were cases of unanimity among our internal and external evaluators. Almost every evaluator found that the descriptive tags and captions for still images, and the sentiment metadata for video and audio files, were not particularly useful, while almost every evaluator judged the automatic transcriptions of video and audio items to be useful.

This image shows the automatic translation of a PDF of the transcription of graphics pioneer Alvy Ray Smith’s oral history into Hindi.

Lessons Learned

From the perspective of our collections group, the lessons learned from this rapid prototype for our OpenCHM goals are clear, direct, and very helpful. Automatic transcription of video and audio materials using machine learning tools will be our top priority in this area, followed by automatic translations of these transcripts into a set of languages. In the prototype project, we used Microsoft’s Cognitive Services “out of the box” functionality, without any customization or training. For our future efforts in automatic transcription and translation, we will strongly pursue customization of the machine learning tools by using available options to train them on our own corpus of documents and human-produced transcripts. Lastly, we will ensure that future systems allow the user to select between human-generated or machine-generated metadata, or a combination of the two. In all cases, machine-generated metadata will need to be clearly identifiable.

From the perspective of our information technology group, the prototype project has also generated clear and useful lessons about process that should inform upcoming work on OpenCHM. The need to define, document, and agree upon all the requirements carefully is clear. In working with new technology, the goals and outcomes should be simplified and clearly defined. Developing an effective process for exploring the use of new technology, and learning clear lessons from this, is essential because new tools and capabilities are being constantly developed.

From my own perspective, the rapid prototype project was an extremely valuable experiment for CHM, and I hope for the larger community. We have a much greater understanding of the realities of using current commercial machine learning services in a museum setting, and of their limitations. Below is what I have learned as an historian and curator from working on the project and from analyzing the prototype that we have built.

Metadata

The machine-generated metadata do not, at this stage, provide much beyond what is available through standard full-text search. This is true both for the expert and the casual user. Little of the machine-generated metadata offers insights beyond full-text search: the systems cannot readily identify anything not explicitly in the text itself. This includes the person, key phrases, organizations, and locations metadata for documents, and the keywords, topics, and people metadata for audio and video. That said, these metadata do serve as previews of the full text and are useful for a kind of browsing. An important exception to this limitation was found in the case of prominent individuals. The systems were able to associate the names of prominent individuals with topics not explicitly contained in the full text. For example, the system was able to associate the names “Steve Jobs” and “Gordon Bell” with the topic “National Medal of Technology and Innovation,” and the name “Nandan Nilekani” with the topic “Chairman of Infosys.” Sentiment metadata is not useful, as it is mostly inaccurate for the videos in our sample.

The metadata produced by the prototype for a photograph of a person using a Xerox Alto computer.

For the purely visual items in our prototype, our 2D still images, the system did, of course, provide metadata largely unrelated to text. When text did appear in an image, the system did a very good job of identifying it and providing the recognized text as metadata. Image recognition provided metadata about the scene that was, in the main, too general for productive use as alternative text for screen readers, and that was also noisy with false tags, likely due to the specialized content of the images in our collection.

Commercial Machine Learning

Throughout the project, it became apparent that the commercial machine learning services are primarily geared toward the needs of commercial customers, for uses in marketing, customer management, call centers, and the like. They are tuned to the analysis of relatively short pieces of video, audio, and text, and to results fitted to commercial interactions. Our experience shows that interactions in the museum space—where accessing, assessing, and using information is paramount—require machine learning services that can contend with larger video, audio, text, and image files and that meet much more stringent requirements for accuracy, thoroughness, and overall quality. We hope that future experiments with machine learning tools customized through training on our collection can address these stringent quality requirements.

Insights and Final Thoughts

From our experience, I believe an important lesson is to decouple a museum’s database from any specific machine learning service. Instead of directly tying the database to particular commercial AI services through APIs or other links, one should make space for the results of machine learning services to be imported into the museum’s database. Digital elements from, or representations of, the museum’s collection could be processed by the services, and the data so produced could then be placed in the database. This would be a batch process rather than a live link. That is, one should create fields in the museum’s database where the results of changing or different machine learning services can be placed, and later replaced, much as human-curated metadata, such as descriptions, are refined and corrected over time.
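A minimal sketch of this decoupled, batch-import approach appears below, with SQLite standing in for the museum’s collections database. The table layout, field names, and the provenance flag that distinguishes human from machine metadata are illustrative assumptions, not CHM’s actual schema.

```python
# Hypothetical schema and batch import: results from any machine learning
# service are written into generic metadata fields, tagged with provenance,
# so the service itself can change without reworking the database.
import json
import sqlite3

conn = sqlite3.connect("collection.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS item_metadata (
        item_id     TEXT,
        field       TEXT,   -- e.g. 'keywords', 'topics', 'transcript'
        value       TEXT,   -- the metadata payload, stored as JSON
        source      TEXT,   -- 'human' or 'machine', so provenance is visible
        model       TEXT,   -- which service or model produced it, if machine
        imported_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def import_ml_results(item_id: str, results: dict, model: str) -> None:
    """Batch-insert one item's machine-generated metadata fields."""
    rows = [
        (item_id, field, json.dumps(value), "machine", model)
        for field, value in results.items()
    ]
    conn.executemany(
        "INSERT INTO item_metadata (item_id, field, value, source, model) "
        "VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    conn.commit()

# Example: metadata previously exported from a machine learning service.
import_ml_results(
    "item-0001",  # hypothetical item identifier
    {"keywords": ["Ethernet", "Xerox PARC"], "topics": ["networking"]},
    model="video-indexer-2021",
)
```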

Machine Learning

The greatest promise for machine learning that I see currently from my experience with the prototype is for automatic transcription of audio and video items. These automatic transcriptions have a workable accuracy for the purpose of discoverability and improving search. They provide useful full texts for audio and video materials. They really could unlock vast parts of the collection. Similarly, cutting-edge text recognition of scanned documents would be terrifically important to unlocking other huge swaths of the collection as they are digitized.

Images

In the analysis of 2D images, the greatest potential for the machine learning services that I could see was in providing alternative text that makes images accessible to people with vision challenges. I also think that text recognition in 2D images and moving images is useful, providing more text metadata for full-text search.

Translation

Lastly, automatic translation of museum materials, including metadata, does seem to hold promise for making the museum and the collection more accessible to speakers of languages other than English. Perhaps we could use automatic translation tools to create versions of the museum metadata in a set of languages and give users the ability to operate in the language version of their choice. In this way, both the searches and the metadata would be in the user’s chosen language. As automatic translation becomes cheaper, more language versions could be offered, and as it improves, new versions of the metadata could be created. A minimal sketch of this idea follows.
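The sketch below illustrates the idea using the Azure Translator REST API (v3.0): a single piece of metadata text is translated into several target languages in one request. The key, region, and sample text are placeholders; this is an illustration under those assumptions, not a production pipeline.

```python
# Hypothetical use of the Azure Translator REST API (v3.0) to produce
# multiple language versions of one metadata string.
import requests

TRANSLATE_URL = "https://api.cognitive.microsofttranslator.com/translate"

def translate_metadata(text: str, target_langs: list[str]) -> dict[str, str]:
    response = requests.post(
        TRANSLATE_URL,
        params={"api-version": "3.0", "to": target_langs},
        headers={
            "Ocp-Apim-Subscription-Key": "<translator-key>",
            "Ocp-Apim-Subscription-Region": "<resource-region>",
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    response.raise_for_status()
    translations = response.json()[0]["translations"]
    return {t["to"]: t["text"] for t in translations}

# e.g. translate one description into Hindi and Simplified Chinese.
print(translate_metadata("Oral history of a networking pioneer", ["hi", "zh-Hans"]))
```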

Machine learning tools appear to have their proper place in the toolkits of museum professionals. Used properly, machine learning tools can augment the skills and knowledge of museum staff, assisting them in unlocking, sharing, studying, and caring for the collections with which they work.

Featured image: The AI and Robotics Gallery in Revolution, as seen in CHM’s virtual tour.

Join us for a free webinar!

If you’re a museum professional, or simply fascinated by machine learning and want to hear more about our experience, join us on February 18 at 11 am PT. Register here.


Worry and Wonder: CHM Decodes AI

https://computerhistory.org/blog/worry-and-wonder-chm-decodes-ai/ | January 12, 2021

In January 2021, CHM will begin three months of special programming exploring the tension between fear and trust of technology through artificial intelligence.


We shape our tools and they in turn shape us.

— paraphrasing Marshall McLuhan

Fact and (Science) Fiction

We live in an age of escalating technological disruption. Forces seemingly beyond our control can give us wonderful benefits, or take them away, and sometimes they can do both at the same time. It is natural to be concerned about changes that affect our daily lives and our society and to ask: Am I part of the change? Does it take me into account? What are the benefits and tradeoffs? Is it worth it—for me, my community, humanity? Such questions are as old as the history of technology.

These days, as artificial intelligence (AI) masters more and more capabilities we regard as uniquely human, it may bring us face-to-face with the kind of “other” portrayed in science fiction for over a century: creations that can increasingly imitate the functions of living minds, and of living bodies (as robots). It’s natural to worry, and to hope. 

CHM decodes technology for everyone, providing objective, helpful context for people to become informed technological citizens. CHM will explore the tension between fear of and hope for technology and new innovations through the lens of artificial intelligence. We’ll look at what AI is, how it works, and its impact on our daily lives—past, present, and future. To prepare for digging deeply into how AI intersects with issues like health, work, dating, communications, and privacy, we have mined our previous work to provide basic information and resources on the fundamentals of AI. We’ll help you understand the line between fact and (science) fiction. Catch up on your knowledge and see what we have in store. If you think you already have a good grasp of AI, try our KAHOOT! Challenge and enter the running to win a prize (the quiz must be completed by January 26, 2021).

What exactly is AI?

AI, or artificial intelligence, refers to intelligence exhibited by machines rather than humans. The term is often used to describe computers mimicking human thinking in areas like learning and problem solving. AI is becoming more and more invisible as it operates “under the hood” in many different kinds of applications. AI determines which search results to show us when we search Google, and which videos or posts to suggest on YouTube and Facebook. Facebook and Instagram use AI to target online advertising, offering suggestions for what we should buy next. Digital assistants such as Siri and Amazon Alexa use AI to interact with us. The loved and hated autocomplete feature that suggests words as you text is AI in action. More recently, contact tracing apps have used AI to predict the risk of infection, helping prevent the spread of COVID-19.

AI’s history extends back centuries to a variety of automated devices and “smart” machines that were changing jobs, finance, communication, and the home long before computers. The formal field of AI is widely considered to have started at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, which gathered scientists and mathematicians for six weeks of workshops reflecting different philosophies and research approaches. But advances have been slower than initial projections. It turns out that some key aspects of human intelligence are not so easy to reproduce in computer code, and not necessarily the ones we would expect: seemingly simple functions like walking, vision, and recognizing objects have been among the hardest for traditional AI to master.

What are they talking about?

Different terms related to AI pop up regularly in all kinds of media, and they can be confusing. Here’s a vocabulary cheat sheet with examples from both the real world and science fiction.

Artificial General Intelligence (AGI), aka Human-level Artificial Intelligence (HLAI), or “strong AI”

This is the AI you see in science fiction, and it’s still hypothetical. It refers to a machine that has the ability to understand or learn any intellectual task that a human can. If faced with an unfamiliar situation, for example, the machine could quickly figure out how to respond. This is a scary idea for some people, because given how fast an AGI could access and process data, its capabilities could outstrip those of humans at a rapid pace. Artificial superintelligence (ASI) would far surpass the most gifted human minds across a wide range of categories and fields.

For example: the computer Hal in the movie 2001: A Space Odyssey; the Terminator in the movies by the same name; the robot Ava in the movie Ex Machina

Artificial Narrow Intelligence or “weak AI”

Artificial narrow intelligence refers to the application of AI to specific, or narrow, tasks or problems. Most of the AI in use today is narrow. Computers are programmed directly or set up to teach themselves (see “machine learning” below) how to do particular tasks or solve specific problems.

For example: self-driving cars; character recognition; facial recognition; speech recognition; ad and product recommendations; game playing (see additional examples below in “Machine Learning” and “Deep Learning and Neural Networks”)

Symbolic AI, or “Good Old Fashioned” AI (GOFAI)

Symbolic AI began in the 1950s, and was the dominant type of AI before machine learning eclipsed it in the 2010s. Symbolic AI systems are preprogrammed to follow “heuristics,” or rules of thumb, that are similar to how humans consciously think about problems. They typically operate by manipulating structures formed from lists of symbols, such as letters or names that represent ideas such as “line” or “triangle.” For example, a “triangle” would be represented by two lists of lines and angles, with the lines and angles themselves being symbols composed of their own parts, such as “point.” Many AI researchers used to believe that such symbolic systems actually modeled how human minds worked. However, once programmed, these systems can’t improve on their own. They can’t learn to get better at their tasks, nor can they learn to do anything beyond what they were programmed to. 

For example: Shakey the robot; automatic theorem provers; IBM’s Deep Blue chess-playing computer

Machine Learning (ML) and Algorithms

Machine learning is a subset of artificial intelligence that uses algorithms, or sets of rules, to “learn” to detect patterns in data, enabling it to predict outcomes and make decisions. In order to learn, an ML system must be trained on lots of examples in the form of data before it can be applied to real-world problems. In “supervised” learning, humans provide the answer key to the training data by labeling the examples, e.g., “cat” or “dog” (a minimal code sketch follows this entry). In “unsupervised” learning, the system finds patterns in the data on its own and groups similar things together. In “reinforcement” learning, typically used for game playing, the system learns by playing the game or trying the task thousands of times. Simpler ML systems use statistical techniques to make predictions or classify things. Machine learning systems are narrow AIs because they can only perform tasks for which they’ve been previously trained. They are only as good as the data they’re trained on, and the use of historical data sets risks baking existing societal biases into these ML systems.

For example: machine translation; targeted advertising; assessment of risk for credit scores, insurance, etc; IBM’s Jeopardy-playing Watson computer
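As a toy illustration of supervised learning, the sketch below trains a classifier on a handful of human-labeled examples and asks it to predict the label of a new one. The data is invented, and scikit-learn is just one commonly used ML library.

```python
# Supervised learning in miniature: the labels are the human-provided answer key.
from sklearn.tree import DecisionTreeClassifier

# Each animal is described by two made-up features: [weight in kg, ear length in cm].
features = [[4.0, 6.5], [5.2, 7.0], [30.0, 12.0], [25.0, 10.5]]
labels = ["cat", "cat", "dog", "dog"]   # the answer key supplied by humans

model = DecisionTreeClassifier()
model.fit(features, labels)             # "training" on the labeled examples

print(model.predict([[4.5, 6.8]]))      # -> ['cat'], a prediction for new data
```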

Deep Learning, Neural Networks, and Big Data

Deep learning is a type of machine learning in which artificial neural networks, structures inspired by the human brain, learn from large amounts of data, replacing simpler statistical methods. This allows machines to solve complex problems even when they’re using a data set that is very diverse, unstructured, and interconnected. However, neural networks require much larger data sets, or “big data,” than other ML techniques for uses such as facial recognition and emotion recognition (a minimal code sketch follows this entry). When companies like Google and Facebook use big data sets that include personal information to help them make predictions about people’s behavior, concerns about privacy can arise.

For example: facial recognition; self-driving cars; deep fakes; art generation
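For readers who want to see what the training loop of a neural network looks like, here is a toy example in PyTorch. The data is random stand-in numbers rather than real “big data,” so it is a sketch of the mechanics only, not a model that does anything useful.

```python
# A tiny neural network trained on made-up data, just to show the loop:
# predict, measure the error, and nudge the weights to reduce it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(100, 4)             # 100 invented examples, 4 features each
targets = torch.randint(0, 2, (100,))    # their (random) class labels

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                      # compute how to adjust the weights
    optimizer.step()                     # take a small step in that direction

print("final training loss:", float(loss))
```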

Singularity

The Singularity is a theory that at some point in time machines will begin to improve themselves, taking off exponentially and leading to machine superintelligence unchecked by human controls, with unknown, possibly dangerous, outcomes for human beings. Futurists offer predictions about what year that may happen.

Should I be worried or hopeful?

Like other technologies throughout history, AI comes with advantages, disadvantages, and unintended consequences. There is tension between the benefits and convenience of new technology powered by AI and the risks of potentially exposing personal data and endangering our own freedom and security. An immediate concern many have regarding artificial intelligence is that it might replace workers, leading to high unemployment, yet others believe AI can help us do our jobs better by reducing human error. Some are concerned about the mistakes made by AI that jeopardize safety, such as during autonomous driving, while others think it will protect us from distracted human drivers.

Check out more CHM resources and learn about upcoming decoding AI events and online materials.

NOTE: Many thanks to Sohie Pal and other CHM teen interns for their thoughtful comments on AI that have been incorporated into this blog.

Image Caption: Boston Dynamics, Legged Squad Support System Robot prototype for DARPA, 2012. Around the size of a horse. Credit: Wikimedia Commons
