Monday, November 18, 2013

Breaking News as Native Transmedia Journalism

The following column is by ATLAS Ph.D. student Kevin Moloney, whose research focuses on transmedia storytelling. Moloney writes a blog called "Transmedia Journalism: Porting Transmedia Storytelling to the News Business" at http://transmediajournalism.org/

As many as twenty bullet holes riddle the entryway of the New Life Church in Colorado Springs on Monday, Dec. 10, 2007, a day after a gunman entered the building. Two victims died in addition to the gunman, and two more were injured in the second shooting to hit a Colorado religious organization in a day. The gunman in the Colorado Springs shooting was killed by a church security guard. Two others died in a possibly related shooting at a dormitory for missionaries in Arvada, Colo., a suburb of Denver. (Kevin Moloney for the New York Times)

Anyone who follows news in the 21st-century mediascape has experienced this native and uncoordinated form of transmedia journalism first hand. Here in Colorado last month we suffered massive and destructive flooding. The story is still unfolding, and the aftermath will endure for months more. When the news struck that local mountain streams would surpass 100-year flood levels, my friends, colleagues and I dove headlong into a diverse array of media forms and channels to digest the news. I turned on the local TV broadcasts, listened actively to local public radio, watched Twitter hashtags, Facebook posts and Instagram feeds, awaited SMS texts from the university and picked up the phone to talk to friends and relatives. I didn’t get the story from one place; I used multiple devices and technologies of all ages. I didn’t get it in any one media form; the story came as text, video, audio, conversation and even the clouds outside my window. I absorbed complete stories from multiple sources and sewed them into a larger and more complex picture of what was happening than I could have had I depended on only one of them.

This applies to other breaking stories, from the Navy Yard shootings to the Boston Marathon bombings to Sandy Hook Elementary. Once engaged with a story that demands fast attention, we immerse ourselves in multiple spaces in the mediascape — online and off — to gather the complete and current picture. This is not a planned and curated form of transmedia journalism. It is an emergent form created by each individual as he or she engages with the story. It illustrates the idea that we can engage with multiple characters across multiple stories in multiple places to achieve what game designer Neil Young calls “additive comprehension.”

We are deeply engaged when rapidly moving events raise cultural, civil or environmental concerns, or have an immediate impact on our lives. A drive to know more, see more and stay up to date leads us naturally to transmedial consumption of news. But what about the stories that don’t scream for immediate attention from every media form and channel available? Here, as we do for traditional news stories, we depend on style, human connection and compelling narratives to draw a public. We can carry those techniques to predesigned transmedia narratives so that, once engaged, the public has somewhere to find more. Through transmedia implementation we also open many more access points for the public to find our story.

Thursday, July 18, 2013


Using Technology to Improve Maternal Health in the Developing World
ATLAS Ph.D. student discusses PartoPen technology and issues surrounding Information and Communication Technology for Development (ICTD)
by Ira G. Liss

Each year, billions of dollars are spent on engineering and technology projects in underdeveloped regions around the world. Varied in approach and strategy, they usually share one overall goal – improve the lives, health and living conditions of people. To do this effectively requires careful planning and a deep understanding of conditions on the ground.

The Big Picture of Global Maternal Health
I spoke with ATLAS Institute Ph.D. student Heather Underwood, whose focus is healthcare and Information and Communication Technology for Development (ICTD). Her research paper, “The PartoPen: Using Digital Pen Software to Improve Birth Attendant Training and Maternal Outcomes in Kenya,” recently won first place in the 2013 Student Research Competition (SRC) of the Association for Computing Machinery (ACM).
Two medical students from the University of Nairobi use PartoPens to work on a partograph worksheet. One of the PartoPen pilot studies showed that its instructions and decision support made a significant impact on students' ability to correctly complete these worksheets.
Underwood designed the PartoPen, an interactive digital pen-based system, to work with an existing paper-based labor monitoring system, the partograph. Widely used around the world since the 1970s, the partograph was promoted by the World Health Organization (WHO) in 1994 when a large-scale study showed its effectiveness in improving birth outcomes in underdeveloped regions.

According to the WHO, almost 300,000 women die every year from pregnancy-related complications, mostly in the developing world. Quoting Underwood’s paper, “Used correctly, the partograph provides decision support that assists in early detection of maternal and fetal complications during labor. Especially in rural clinics, early detection allows transport decisions to be made in time for a woman to reach a regional facility capable of performing emergency obstetric procedures.”

Improving Existing Technology 
Underwood embarked on her Ph.D. research with the goal of developing and applying technology that could make the existing partograph system easier and more efficient to use. By adding new technology to an older system, she hoped to improve health outcomes while maintaining the continuity of a paper system that’s been in place for decades.

PartoPen Specs
PartoPen shown with partograph paper form in the background.
The PartoPen system uses customizable software written by Underwood for the Livescribe 2GB Echo digital pen. It captures and synchronizes audio and handwritten text, and digitizes handwritten notes into searchable and printable PDF documents. The pen has an infrared camera in its tip, activated when the user presses pen to paper. The camera reads a pre-printed dot pattern (placed on the page by a laser printer), which lets the pen determine its location on the form and then perform location-specific functions.
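In outline, the pen's location-specific behavior amounts to a lookup from the decoded page position to a region of the printed form. The sketch below illustrates the idea in Python; the region names, coordinates and spoken prompts are hypothetical stand-ins, not details taken from the PartoPen software.

```python
from typing import Optional

# Hypothetical form regions: name -> (x_min, y_min, x_max, y_max) on the page,
# in millimeters. The dot pattern lets the pen decode its (x, y) position.
FORM_REGIONS = {
    "fetal_heart_rate": (10, 20, 90, 60),
    "cervical_dilation": (10, 70, 90, 110),
}

def region_at(x: float, y: float) -> Optional[str]:
    """Return the form region containing the pen position, if any."""
    for name, (x0, y0, x1, y1) in FORM_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def play_audio(message: str) -> None:
    print(f"[pen speaker] {message}")  # stand-in for the pen's audio output

def on_pen_down(x: float, y: float) -> None:
    """Perform a location-specific function when the pen touches the paper."""
    region = region_at(x, y)
    if region == "fetal_heart_rate":
        play_audio("Record the fetal heart rate every half hour.")
    elif region == "cervical_dilation":
        play_audio("Plot dilation against time and watch the action line.")

on_pen_down(30, 40)  # -> [pen speaker] Record the fetal heart rate every half hour.
```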

Better Understanding by Just Being There 
“There’s something indirect about introducing technology in a place where you don’t have basic needs fulfilled first,” Underwood said. “While the PartoPen is a useful tool, you need a lot of things in place before realizing its full benefits. Being at a hospital in Kenya gave me a better understanding of the whole picture.”

Before observing the maternity wards of the Kenyatta National Hospital in Kenya, Underwood studied the well-documented barriers to the partograph’s effectiveness: lack of training, complexity of the paper form and data interpretation issues.

Addressing this, Underwood’s PartoPen provided three key capabilities: user instructions, decision support and time-based reminders.

In the hospital where she conducted her research, she observed a major obstacle first-hand. “I went in focused on a very specific need and realized there are a million other needs. Understaffing is one of the biggest problems at Kenyatta National Hospital. There aren’t enough nurses for the number of patients.
Underwood demonstrates the functionality of the PartoPen while a nurse midwife monitors a laboring patient and members of Kenya's media look on.

“I began to see why the paper forms were not being filled out. At Kenyatta, the nurses are well-trained and familiar with correct partograph use, but if a patient starts bleeding, everyone is focused on saving the patient. Understandably, completing paperwork is not the first priority.”

The Big Takeaway
“One of the key messages of ICTD as a field is the need to understand and work with the existing social and cultural factors when introducing a new technology,” she said. This is where partnerships come in. She now focuses on helping hospital and clinic staff take ownership of their technology choices, that is, encouraging those who work directly with maternal health issues to define their needs and shape technology to address those needs.

Underwood has found the process challenging. Hospital staff are busy. Like most of us, they can be creatures of habit, resistant to the problem-solving learning curve necessary to achieve positive results.

Expanding Roles in Evolving Fields 
After spending 12 to 15 hours a day in labor wards, watching nurses work and women give birth, Underwood gained an understanding of the paper/data trail – where paper forms begin, how they are filed, what gets filled out and where the data goes.

“I began to see my work expanding into occupational workflow,” she explained, which involves examining the workplace and asking, how are things done? How can existing procedures be improved? She also found herself exploring ethnography – the study of customs, culture and cultural phenomena.

In addition, she’s come to realize that having a more formal healthcare background in these settings would be valuable. So she is now working towards a master’s degree in public health in conjunction with her ATLAS Ph.D.

“I never thought I’d be writing software for nurses in the labor wards of Kenya and studying public health. It’s this interdisciplinary aspect of ATLAS that allows me the flexibility and support to design and evolve my areas of study and address real needs as they come up.”

~  ~  ~  ~
ATLAS Institute offers a two-year Master of Science in Information and Communication Technology for Development (MS-ICTD) that includes three semesters in residence and a one-semester practicum – a hands-on internship with an organization engaged in ICTD efforts. To learn more, contact Ruscha Cohen, graduate program adviser at Ruscha.Cohen@colorado.edu or visit http://www.colorado.edu/atlas/newatlas/masters.

Learn more about the top graduate student award that Heather Underwood won in the ACM Student Research Competition Grand Finals, http://www.colorado.edu/atlas/research/underwood.html

Learn more about Heather Underwood’s research, http://www.partopen.com/index.html 

The writer/interviewer Ira G. Liss is assistant communications director at ATLAS Institute, University of Colorado Boulder. He is also a musician, songwriter and performing artist. See his video artwork here, www.youtube.com/theIraLissShow.

Wednesday, June 26, 2013

An Interview with James McCartney, the developer of SuperCollider, a programming language for real-time sound, music and algorithmic synthesis
By Ira G. Liss

SuperCollider is a programming language for real-time audio production and composition. It is free, open source software popular among artists, musicians and developers, and it is taught in electronic music classes at many colleges. An enthusiastic worldwide community produces work with it and contributes to its development.

During the week of May 20, the 2013 SuperCollider Symposium was held in the Roser ATLAS building on the CU-Boulder campus. Artists and developers from around the world performed in evening concerts, and daytime sessions covered various aspects of development, invention and creativity.

I had the opportunity and privilege to interview one of the attendees, the creator and developer of SuperCollider, James McCartney. He first released the software in 1996, and over the years it has undergone several upgrades. Currently he is a senior engineer in Apple's CoreAudio group in Cupertino, California, which is responsible for operating-system-level audio services for Mac OS X and iOS.

To provide a more direct experience of the software, I asked two CU artist/composers to comment on it:

“For me, SuperCollider is the fastest and most expressive way to achieve the music I create. Its openness and flexibility provide the artist with the ability to create anything they could imagine: from traditional composed works to endless algorithmic structures; from dance music to experimental noise. Although there are various alternatives out there that can achieve the same basic result, I feel that the power of a programming language, rather than a graphical environment, allows for a higher degree of artistic control, despite the initial learning curve.” – Cole Ingraham, DMA graduate, College of Music, CU-Boulder and lead organizer/producer of the 2013 SuperCollider Symposium


Screen shot of the SuperCollider interface – The code (visible on left half of screen) was written for a performance of the Boulder Laptop Orchestra (BLOrk) by CU music faculty John Drumheller. The middle grey window is the graphical user interface (GUI) generated by the code that gives each BLOrk musician control over the sound coming from their workstation during a concert. See other screenshots: http://www.flickr.com/groups/supercollider/pool/tags/screenshot/.


“To me, SuperCollider is a toolkit for building environments for sound art creation. Because of the vastness of possibilities with this toolkit I see it as a blank canvas with a large set of brushes and pigments, or a large pile of scrap metal with state-of-the-art welding torches. SuperCollider has allowed me to explore sound in a way that was not available before and has opened new paths and ways of thinking about sound, music and image.” – John Drumheller, instructor and director of music technology, College of Music, CU-Boulder

The following interview was conducted on May 23, 2013.

Ira G. Liss: How did you begin making electronic music?

James McCartney: I had an interest in computer science, but the reason for considering computer science was because I was interested in electronic music. I’ve always done audio.

I was using Csound and stuff like that. First I was using analog synthesizers, then I learned Csound. I didn’t really like it that much, so I started writing my own tools, I guess, from that point.

James McCartney
So there were several different iterations. I wrote something else, Synth-O-Matic, then I wrote SuperCollider Version 1 and then Version 2, which sort of established the language the way it is now.

And then, I wanted to change the architecture of it, so I wrote SuperCollider Version 3. That’s what people are using now.

The idea was just to make a language that made it possible to create sounds – to write code that would make whole classes of sound or whole ranges of sonic possibilities from not-so-much code. That was the idea basically.

IGL: Did you have a conscious desire to free yourself from the sounds of traditional, acoustic instruments?

JM: I think that’s been the goal of electronic or computer music; maybe not the goal, but certainly the freedom that it offers – the ability to make any sound possible.
The loudspeaker is the most flexible sonic instrument.

IGL: It is amazing that the vibrating diaphragm can create almost anything.

JM: And then if you have a computer that can create any waveform going into something that can produce any sound – theoretically, there’s no limit to what sound you can make.

All electronic and computer music is in that realm – trying to exploit that realm of possibility, or has that potential to exploit it.

IGL: Could you talk about what you were striving for in creating SuperCollider?

JM: There’s a range of techniques that people have discovered, from all the analog stuff – oscillators and filters – all the tools that have been used. The idea was to bring it all into this software that made it easy to configure it all dynamically, so that while it’s running, it can be creating these sound structures that can be unique to the moment.

IGL: Kind of analogous or similar to traditional musical instruments where you make changes and get instant feedback or instant gratification; you’re not waiting months for the code to run somewhere on a mainframe.

JM: There’s a story about that for me. I was working on some music for modern dance and had a program that I was running on my computer and it wasn’t real time. At the time, computers were not fast enough. While I was working on this piece (and it was taking me forever because it was taking so long to calculate the audio), the PowerPC chip came out, which had floating point; it was the first RISC microprocessor that had really fast floating point.

Apple was making one using this IBM chip; I went on the day that it was available and I bought it. I recompiled my program and it ran 32 times faster which was faster than real time. So I went from going maybe a third of real-time to faster than real-time literally overnight.

I finished that piece and then decided I’m going to rewrite all my code to work in real time – with the idea that it can work in real time. Before that, I’d written it where you set it all up and then it will make the sound for you in the file.

I changed it so that it would make the sound directly off to the speaker in real time; that was the beginning of making a tool that could do live music.

(Editor’s note: Apple made the PowerPC chip available in their Mac computers in 1994.)

IGL: It sounds like you were given wings or went from a horse-drawn carriage to a V8 engine.

JM: That was a good event. It took me some years more to make it where it was really easier to use. There were a lot of limitations in the first version to be overcome. That’s been a constant process, even now, thinking about how I could do this, and what would make it even more flexible or better, more intuitive to use.

IGL: When that moment happened, where you realized it was faster than real time, was it a celebration kind of moment? Did you tell your colleagues, wow, this chip is fabulous! Or did you just take it in stride?

JM: I wasn’t really communicative about it. I just thought, Great! I can get my piece done in time, was the main thing at that moment. It made me excited to be working on it.  I wasn’t really shouting about it.

IGL: Did you come to computer-generated sound with a traditional music background?

JM: No, I listened to music when I was a kid and I heard Walter Carlos (Editor’s note: Walter Carlos later changed her name to Wendy Carlos; she is the co-creator of the album “Switched-On Bach.”) and thought, oh, that’s kind of interesting, and I heard German electronic groups in the late ’70s and I thought, oh, this is something that I’m really interested in. I wasn’t that much interested in music until I heard electronic music. I do like a lot of music but for myself, I’m not interested in acoustic music for what I want to do.

IGL: That’s very clear-cut then. For each of us, our preferences are pretty much things we’re born with. We don’t really choose them. So it makes sense then that electronic music is really where your heart led you to listen and make tools.

JM: I was very mathematical. Walter (Wendy) Carlos had a box set of LPs that told how a synthesizer worked. It had a bunch of examples of “this is frequency modulation.” It was pretty easily accessible for me and I understood it very quickly and I liked the sound.

IGL: Did you understand it easily partly because mathematics is your medium as well, a part of your comfort zone?

JM:  I’m not really a mathematician. I never studied math at the higher levels. It was always just very intuitive to me. The plasticity of electronic sound – the fact that you can make it sound like anything or that it had a really wide range – that’s what attracted me to it.

IGL: Do you have ideas or visions of where electronic music is going?

JM: No. (laughter)

IGL: Well, we don’t have crystal balls.

JM: I was in Austin, Texas. There was very active antagonism toward electronic music there. I didn’t anticipate that electronica, which is oriented around dance music, would take off. Now, if you say electronic music to someone, they think dance music, but I don’t mean that.

IGL: Could you say what electronic music is for you?

JM: For me, it’s just exploiting the possibility of sound – having any sound available to compose with. That’s the main thing. Sound is a pretty psychoactive medium.

IGL: Sometimes I see sound as being sculptural and almost literally sculpture. Was it just last night, the piece that was played in darkness? (Editor’s note: There were several concerts during the week of the SuperCollider Symposium 2013. The piece being referred to was an 8-channel fixed-media piece “Lucent Voids” by London-based artist Erik Nyström.) It was so unpredictable! There was no knowing from moment to moment where it would go and what texture it might have. For me, I was seeing a space that crackled or fell or tumbled, so the artist, through sound, was creating a 3D environmental physical experience, like sculpture or architecture.

JM: I think that happens for me a lot more in these multi-channel pieces that are really enveloping. You feel like you’re in this environment. I tend to have more visual associations with the sound in that context, rather than the stereophonic pieces. I don’t get that visual association with stereophonic pieces; they don’t quite immerse me in it.

IGL: For me, it’s easier to experience in the Black Box (the theater space of ATLAS Institute) where the speakers surround the audience, that is, when the composer, artist and/or sound designer has anticipated the near and far of the speaker setup.

How do you feel about a symposium or gathering of people to talk about a work of software that you personally created? Is that a very satisfying experience?

JM: Yeah, of course I like it. (laughter) I’m glad people are finding it interesting enough to have a symposium on it. That’s good.

IGL: Are there pieces that you are working on or that you might be performing that you can speak about?

JM: Well, no. Once I’ve done more development, it’s been a lot of thinking about the tool. I have ideas of how to change the tool in order to do it the way I want to do it. There’s been a lot of that kind of thing. I’m still looking for more flexible ways of composing sounds.  

There’s all this live stuff happening where you create a whole bunch of stuff and destroy it in the piece. I’m interested in saving that history while you’re making it so you can go back at any point and get what you had and save that and maybe use it in the future again. It’s like keeping a history of what you did while you were coding and being able to go back to it.

IGL: I could see how that would be useful.

JM: And also making the code into a graph that can be manipulated more semantically than just as text.

IGL: When you say graph, you’re speaking of a graphical interface where users might push a line or cursor to make changes?

JM: Right, it’s still a programming language; it’s still code but the code is represented as a structure you can manipulate rather than just as text.

IGL: What language or languages were the precursors to what SuperCollider is now?

JM: SuperCollider is based a lot on a language called Smalltalk. It’s object-oriented. Smalltalk has some features of Lisp. I’ve added more features of Lisp and just a few features from other languages as well.

IGL: So you’ve been creative in finding existing code and language to help build SuperCollider?

JM: Well, the semantics. It’s mostly Smalltalk but also some from other languages in order to have a theory of operation. I have a solid foundation for the language basically.

IGL: Do you have any advice for musicians, composers or developers who might be in the beginnings of their studies or who might like to create something new?

JM: I don’t really know. I sort of followed a path and I’m not sure I have a good idea of what a good way to go is. Do you mean for creating another tool?

IGL: Let’s rephrase the question. If someone is looking to create a tool, and I imagine it’s not an overnight process. . .

JM: I think the thing is, in order to create a tool, you have to experience some existing construct of how you’re doing something and then have an idea of what you find is difficult for you with it. You also need to have skill to implement something to get around that difficulty.

IGL: There’s a problem/solution aspect to the process. You began it by working in software that you didn’t find easy or satisfactory.

JM: Right.

IGL: And so you invented something easier or better. It’s an ongoing journey.

JM: Right. At first it was someone else’s software that I found difficult. Now it’s my own software that I’m finding difficult and needing to route around. (laughter)

IGL: Do you see your software in a certain way in the future? Let’s say anticipating ahead in five or ten years?

JM: This particular construction of it, I think, probably won’t be used in five years.

IGL: Oh, it will be something else; that’s how rapidly it’s changing.

JM: I don’t know. It’s hard to say. Things have a certain lifespan until the structure becomes unwieldy and then a new thing replaces it. This always happens. I mean I might be the one to replace it with something else or someone else might be the one to replace it. But eventually, there are better ways to do things. So, things get replaced.

IGL: Well, that’s sort of the design of things, of the universe, evolution and biology.

JM: Right.

IGL: So when you imagine your software being replaced, does that thought make you sad?

JM: No, it doesn’t.

IGL: That seems pretty healthy. (laughter)

JM: Because I’m already working on something new, I’ve already abandoned it myself.

IGL: No one can reject it because you’ve already rejected it. You’re ahead of the curve.

JM: (laughter) Yes.

~  ~  ~  ~
related links:
- Ira G. Liss, the writer/interviewer, is assistant communications director at ATLAS Institute, University of Colorado Boulder. He is also a musician, songwriter and performing artist. See his video artwork here, www.youtube.com/theIraLissShow.

- Learn about upcoming events at ATLAS here, http://www.colorado.edu/atlas/newatlas/events.

- Read more details about the SuperCollider Symposium 2013, http://supercollider2013.com/

- See photos of the SuperCollider Symposium 2013: http://www.flickr.com/photos/cu_atlas/sets/72157633626063223/

- Stay informed about upcoming performing arts, talks and special events at ATLAS. Get on the email list by emailing cuatlas@colorado.edu with “email list” in the subject line.

- Download SuperCollider, http://supercollider.sourceforge.net/

Tuesday, May 21, 2013

Computer Games as Gateway to STEM
The Story of “Game Goddess” and ATLAS Ph.D. student Kara Behnke
By Ira G. Liss

Kara Behnke could be on track to become the unofficial “Game Goddess” of the world. Having a passion for computer games – playing them, designing them, teaching others to design them and using them as teaching tools – she is working towards her Ph.D. in Technology, Media and Society at CU-Boulder’s ATLAS Institute.

A Gateway to Science
Behnke believes computer games make excellent gateways to science, technology, engineering and mathematics (STEM) for K-12 students who might not otherwise be attracted to these subjects. “Games are inherently fun. Young people love to play them. So why not use this basic, human desire for fun (an example of positive psychology) to motivate students in the classroom?”

National Science Foundation Programs
As part of her research, Behnke has spent 10 to 15 hours per week in a Longmont high school – part of a two-year program sponsored by the National Science Foundation’s (NSF’s) eCSite program (pronounced “excite”), an acronym standing for “Engaging Computer Science in Traditional Education.” (The NSF is also the sponsor of her ATLAS Ph.D. fellowship.)

Diversity Issues
For decades, the computer science (CS) field – like other science and engineering fields – has had very low representation of ethnic minorities and women. And yet, to be competitive in a global marketplace, innovation and entrepreneurship depend upon the insights and experiences of all of us – the full range of culture, gender and ethnicity that together make a healthy society.

To help bring diversity to these fields, the NSF eCSite and GK-12 programs explore ways to bring computer science and computational thinking to existing curricula across multiple disciplines – including biology, health, art and music. These programs offer the expertise and passion of CS students to K-12 teachers.

Integrating Science in “Non-Science” Disciplines
In these programs, computer science and STEM content are integrated into disciplines where students are already present and motivated. The idea is that these students, given a positive experience with computational thinking, are then more likely to pursue STEM subjects in depth.
In real time, the Microsoft Kinect system scans the movements of students while software programmed by a student team places their silhouettes in an animated graphic background.

Behnke explained, “This year, I helped an art teacher bring computer science to students at Skyline High School in Longmont. It was a very gratifying experience!”

A Game Interface for Performing Arts
Each spring, Skyline presents a showcase of student arts that includes studio arts, 2D and 3D design and dance. Behnke invited students in the afterschool computer science club to collaborate with art students and participate in this art event. She showed students a brief demo of what could be done with the Microsoft Kinect system (a motion-sensing interface commonly used with the popular Xbox 360) and invited them to “create something cool.”
Interactive installation designed by students under Behnke’s direction allows participants to see their image instantly transformed through the creative use of the Microsoft Kinect game controller.
"We had 20 to 30 students working on this project. For many CS students, it was the first time they worked on an art project. And for art students, it was the first time they worked with computer technology to make art.”

Kinect software captured the movements of dance students in real time. Their images were scanned and projected onto screens, with colorful animated graphics integrated into the imagery. The graphics programs were designed by a team of art and computer science students, and the installations were set up in rich environments that invited audience participation.
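In broad strokes, installations like this follow a simple pipeline: read a depth frame, segment out anyone standing closer than the background, and paint that silhouette over generated graphics. Here is a minimal sketch of the idea using NumPy; the distance threshold and synthetic frames are illustrative assumptions, not details of the students' program.

```python
import numpy as np

def silhouette_mask(depth_mm: np.ndarray, cutoff_mm: int = 2500) -> np.ndarray:
    """True wherever someone stands closer to the sensor than the cutoff."""
    return (depth_mm > 0) & (depth_mm < cutoff_mm)   # 0 means "no depth reading"

def composite(background_rgb: np.ndarray, depth_mm: np.ndarray,
              silhouette_color=(255, 255, 255)) -> np.ndarray:
    """Paint the tracked silhouette over an animated background frame."""
    frame = background_rgb.copy()
    frame[silhouette_mask(depth_mm)] = silhouette_color
    return frame

# Synthetic demo; a real installation would read frames from the Kinect SDK
# and from whatever engine is generating the animated graphics.
depth = np.full((480, 640), 4000, dtype=np.uint16)   # empty room, ~4 m away
depth[100:400, 250:390] = 1800                       # a dancer, ~1.8 m away
animated_bg = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
frame_out = composite(animated_bg, depth)
```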

Multi-Media, Multi-Disciplinary, Multi-Successful
“Their art installations and performances incorporated costume, set, environmental and graphic design along with dance, visual arts, animation and computer science – all produced by students,” Behnke continued.
A Skyline High School student presents a set design he created for the multimedia, interactive display and performance.

“They worked successfully in teams while using technology they had never worked with before. They saw how technology can be a wonderful tool – as creative and expressive as clay or paint. And they experienced how multiple disciplines and media can work together in exciting ways.”

An Untraditional Background
“I came to computer science from an untraditional background,” Behnke explained. “I knew I wanted to work in the computer game industry when I came to CU. But there was little support for this when I was an undergraduate student. Computer science classes were for engineering students, and I was a liberal arts student studying Japanese. The closest program I could find to what I wanted was the ATLAS Technology, Arts and Media (TAM) program (an undergraduate minor and certificate program).

“In TAM, I took an elective taught by ATLAS director John Bennett – Virtual Worlds in Second Life. In the first group to take this class, I was introduced to computer science by working with code, building virtual objects and designing programs – all while playing in Second Life, a 3D virtual world. That experience made a big impression on me.”

Independent Study Becomes Model for New Course
Behnke went on to do an independent study with Bennett designing games for the Xbox 360 using the C# programming language. Their work together became the model for a new course offering, ATLS 4519/5519, Computer Game Development.

“I became a teaching assistant for that new class. It was really satisfying to know I was instrumental – in collaboration with John Bennett – in the creation of that class. It helped affirm my view that designing and using computer games can help students of all ages (myself included!) to learn computer science plus related fields – animation, graphic design, storytelling, coding, user interface and more.”

Her Own Proof of Concept
Behnke may be her own living proof of what she believes, researches and teaches – computer games can lead to deeper learning in the applied sciences. They can bring a diversity of students to fields that currently have very limited diversity. 

Using our natural, inherent desire for fun and play, computer games can help young people discover science, mathematics and engineering – and collaborate in the ongoing creation and evolution of new, constructive technologies in a complex, ever-changing world.

~  ~  ~  ~
The writer, Ira G. Liss, is assistant communications director at ATLAS Institute and a performing artist. See video of his original commentary, songs and spoken word here.


Monday, May 6, 2013

Exploring 3D Tactile Technologies for Visually Impaired Children

A Unique Challenge – Helping Families Create Tactile Books for Visually Impaired Children
ATLAS MS-ICTD student Abigale Stangl and computer science team work on human-centered design strategies and tools

Abigale Stangl has taken on a unique challenge. How can parents with visually impaired children create custom-tailored storybooks and learning tools for their loved ones using 3D printing technologies? How can this be done easily and affordably by parents with varying levels of computer proficiency? What kinds of guidelines and best practices will serve families dealing with these issues?

Photo at left: ATLAS MS-ICTD student Abigale Stangl and a computer science team led by assistant professor Tom Yeh work on human-centered computing (HCC) solutions to help parents of the visually impaired. She sits in the lab where computer science graduate student Jeeeun Kim fabricated the tactile book prototypes shown below.

CU’s ATLAS Institute ICTD Practicum
Stangl is conducting this research during her practicum semester, the final portion of her academic program in the ATLAS Institute's Master of Science in Information and Communication Technology for Development (MS-ICTD) at CU. During the practicum, ICTD students turn classroom theory into on-site, real-world practice.

They team up with private or public organizations, companies and NGOs to work on solutions to various quality-of-life issues in communities located around the world.  Stangl teamed up with Professor Tom Yeh and fellow graduate student Jeeeun Kim in the Sikuli Lab of CU’s Department of Computer Science.

A Surprising Path
She came to this project from a surprising path: environmental design and landscape architecture. Quite a leap, one might say. Perhaps.

However, the challenges she took on in her previous field called on her to design and problem-solve across multiple disciplines – urban design, horticulture, community relations, etc. Her proven ability to work comfortably with a diverse set of specialties continues to serve her today in the field of ICTD.

The People Part
There are multiple dimensions to this project. One of them is people, of course – parents, children, teachers, developmental and behavioral psychologists and computer scientists.

“As a cultural component of the work,” she explained, “I regularly visit Denver’s Anchor Center (a preschool for visually impaired children) to observe the ways small children play and learn. They range in age from several months old to five years old. I learn a lot from seeing how teachers and parents interact with the children.”

From this steady observation, Stangl gains insight into what sorts of learning tools might best serve children and parents. One of her insights appears to be a universal truth about learning.

Stories are Fundamental
“We learn from stories and storytelling. Stories are fundamental. Just as parents of children (with fully functioning senses, including sight) enjoy the intimacy and connection of reading to their children with picture books and storybooks, parents of visually impaired children have the same desire to participate in their children’s lives.”

2D to 3D Software Interface
Screen layout at left conceptualizes what a 2D-to-3D software package could look like. As its subject matter, Stangl features the classic children’s book “Goodnight Moon.” The goal is for parents and teachers to be able to create 3D objects from existing 2D children's books that visually impaired children can then explore, play with and learn from.
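One plausible way such a package could work, offered here purely as an illustrative sketch (not the team's actual software), is to treat a scanned page as a heightmap: darker ink becomes taller relief, and the resulting surface is written out as a mesh a 3D printer can build.

```python
import numpy as np

def gray_to_heightmap(gray: np.ndarray, relief_mm: float = 2.0) -> np.ndarray:
    """Map grayscale page values (0 = black ink, 1 = white paper) to heights,
    so the illustration's outlines become touchable raised ridges."""
    return (1.0 - gray) * relief_mm

def heightmap_to_ascii_stl(heights: np.ndarray, cell_mm: float = 0.5) -> str:
    """Emit a naive ASCII STL surface, two triangles per grid cell.
    A real tool would also add side walls and a base so the part prints solid."""
    rows, cols = heights.shape
    facets = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            corner = lambda rr, cc: (cc * cell_mm, rr * cell_mm, heights[rr, cc])
            a, b = corner(r, c), corner(r, c + 1)
            d, e = corner(r + 1, c), corner(r + 1, c + 1)
            for tri in ((a, b, d), (b, e, d)):
                facets.append(
                    "facet normal 0 0 1\n outer loop\n"
                    + "".join(f"  vertex {x} {y} {z}\n" for x, y, z in tri)
                    + " endloop\nendfacet\n"
                )
    return "solid page\n" + "".join(facets) + "endsolid page\n"

# A scanned page would come from PIL: np.asarray(Image.open(path).convert("L")) / 255.
# Here, a tiny synthetic "page" with one dark stroke:
page = np.ones((4, 4))
page[:, 2] = 0.1                                     # a dark vertical line
stl_text = heightmap_to_ascii_stl(gray_to_heightmap(page))
```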

Stangl continued, “For the visually impaired, tactile development is vital. Before children can learn to read (with Braille), their sense of touch needs to be practiced, strengthened and heightened. This comes through the exercise and exploration that tactile books provide. Of course, it’s not only about touch. It’s about the understanding and ‘seeing’ of a greater world with all its complexities and context that is made available through the window of touch.”

The Technology Part
As mentioned, the technology portion of the work is being developed by a team of colleagues in the Sikuli Lab of CU’s Department of Computer Science. Graduate students collaborating in the work include Jeeeun Kim, who contributes to the research by experimenting with software, hardware and a variety of materials to fabricate tactile prototypes. (See below.)

Photo at right: Five textured layouts fabricated by computer science graduate student Jeeeun Kim. Each could potentially be used as material for tactile storybooks. Clockwise from left: a yellow plastic layout shows a tactile illustration of a room interior, formed by a 3D printer; three executions of a diagram of an egg use three materials: 1. cut and folded blue paper; 2. multi-layered laser-cut wood; 3. etched red plastic; and, sitting to the right of a pen, a transparent plastic sheet serves as the base for raised forms, made with a glue gun, that illustrate a room interior.

Several faculty members serve as principal investigators and senior advisors. All contribute to moving the research forward. One of Stangl’s computer science professors, Tom Yeh, proposed the project and ignited her curiosity about tactile perception and interface design for children with visual impairments.

User-Centered Design
“I took a class with Dr. Yeh in human-computer interaction (HCI) and human-centered computing. What I learned in class beautifully fit my worldview and my goals – make sure projects are user-centered. The work must begin and end with the needs of the user,” Stangl continued.

Photo at left: At $2,000, this MakerBot 3D printer represents a price breakthrough that could make it possible for parents of visually impaired children to print their own tactile learning aids at home.

“Given my design background, I’ve seen how both environmental and social information must be synthesized into solutions that are sensitive to the needs of whoever you’re designing for – in this case – teachers, parents and their children.”

~  ~  ~  ~
On April 10, Abigale Stangl and six other ATLAS MS-ICTD students presented the work they completed during their practicum semester. Videos of their presentations will be available online.

~  ~  ~  ~
Link to ATLAS Institute’s MS-ICTD program: http://www.colorado.edu/atlas/newatlas/masters
~  ~  ~  ~

The writer of this article, Ira Liss, is ATLAS Institute's assistant director of communications and also a pianist, singer/songwriter and performing artist. See videos of his original work. Contact him here.



Wednesday, May 1, 2013

Application Developers Alliance and the University of Colorado Boulder’s ATLAS Institute Team Up To Advance App Innovation Research

Press Release
Washington, D.C. (May 1, 2013) - The Application Developers Alliance has established the Alliance Research Fellowship at the University of Colorado Boulder’s ATLAS Institute. The goal of the fellowship is to further understanding of the dynamics at play in the app development ecosystem. The ATLAS Institute has appointed Ph.D. student Sid Saleh the inaugural Application Developers Alliance Research Fellow.

“The Alliance is thrilled to partner with the ATLAS Institute to provide first-of-its kind insight into the developer community and innovation in the apps industry,” said Jon Potter, President of the Application Developers Alliance. “This allows us to tailor our programs and advocacy efforts to meet the needs of developers. Furthermore, our ground-breaking research will give policymakers insight into the work developers are doing, their contributions to our economy, and the challenges they face.”

Saleh has launched two research inquiries. The first is an ongoing poll of application developers that aims to describe the developer community accurately: how many developers there are and of what types, the kind of work they are doing, the specific challenges they face, and how these are evolving over time. The second study, launched in conjunction with the Alliance’s Developer Patent Summits, looks into developers’ experiences with the software patent system.

“We are honored to have the Application Developers Alliance associated with the work of ATLAS through support of our Technology, Media and Society Ph.D. program,” said John Bennett, Director of the ATLAS Institute and Archuleta Professor of Computer Science at the University of Colorado Boulder. “The ATLAS Institute’s focus on interdisciplinary, technology-related research makes us particularly qualified to partner with the Alliance in exploring a wide variety of issues that matter to the developer community. Our scholarship incorporates real-world data and practical challenges. We expect the research the Alliance Fellow conducts will provide unique insight into an industry that has such an impact on global digital society.”

About the Application Developers Alliance
The Application Developers Alliance is an industry association dedicated to meeting the unique needs of application developers as creators, innovators, and entrepreneurs. Alliance members include more than 20,000 individual application developers and more than 100 companies, investors, and stakeholders in the apps ecosystem.

About the ATLAS Institute, University of Colorado Boulder
A campus-wide initiative, the ATLAS Institute leads discovery and innovation at the intersection of technology and society. We seek to better understand the interaction of people with information and communication technology (ICT), and to realize the full potential of that interaction. ATLAS interdisciplinary programs help develop creative designers, critical thinkers, effective leaders, capable learners, transdisciplinary innovators and engaged global citizens.

Contact: Courtney Lamie, Application Developers Alliance, 202-250-3006 (desk); Bruce Henderson, ATLAS Institute, 303-735-0899 (desk)

Wednesday, April 24, 2013

International multimedia artist Nicolas Jaar speaks at ATLAS at 6 p.m. this Friday, April 26, as part of two free ATLAS events

1. Nicolas Jaar – The ATLAS Speaker Series and the Communikey Festival of Electronic Arts present Nicolas Jaar, an international multimedia performing artist, who will talk about his digital music work and creative process.
6 p.m. this Friday, April 26, ATLAS 100, Cofrin Auditorium, first floor of the Roser ATLAS Building

2. Masaki Batoh – The ATLAS Center for Media, Arts and Performance and the Communikey Festival of Electronic Arts present Masaki Batoh and his performance of Brain Pulse Music, which features music based on brain waves.
7:45 p.m. this Friday, April 26, ATLAS Black Box theater, downstairs, lowest basement level B2, Roser ATLAS Building

Nicolas Jaar is founder of the record label Clown & Sunset and released his debut album "Space Is Only Noise" in 2011. Record label colleagues plus emerging Colorado-based music producers will join him for a Q&A discussion.
Jaar has performed around the world, including an acclaimed five-hour improvisation at NYC’s Museum of Modern Art that he will perform again during the Communikey Festival.

Masaki Batoh's Brain Pulse Music is the result of research into the bioelectric functions of the human brain combined with the traumatic aftermath of the Great East Japan Earthquake. As first conceived, the project aimed to realize music from extracted brain waves.

Both events are free and open to the public.
Audiences are advised to arrive 10 minutes early.
Seating is limited and first-come, first-served.

The ATLAS Speaker Series is made possible by a generous donation by Idit Harel Caperton and Anat Harel.

Tuesday, March 19, 2013

Live and in Concert:
New Sound Controller Technologies
Enhance and Create Music

Creating and Controlling Sound
Following its mission of applying technology to a wide range of disciplines – including the performing arts – CU's ATLAS Institute recently presented performances that explored the use of controller technologies to produce sound and music.

On March 2, 2013, computer science Ph.D. student Charles Dietrich collaborated with the Boulder Laptop Orchestra (BLOrk), the ensemble-in-residence of the ATLAS Center for Media, Arts and Performance, in the creation of a piece called “Gestures.” In this piece, three music students moved their hands and fingers in front of 3D cameras that recognized the three-dimensional positions of the center of the hand and the fingers.

Three students control sound through the movements of their hands, which are tracked by 3D Intel cameras. Images projected on the screen behind them show how the cameras' software diagrams the movement of their hands by tracking the centers of their palms and the positions of their fingers.

A Gesture-Based Music Controller
"Our hands are so expressive," Dietrich said. "A technology that can track and record their subtle movements can also help us develop new applications – in music, sculpture, visual arts and CAD programs. Those who study the body language of various cultures may have new tools for research as well."

The hands of doctoral student Charles Dietrich are read by an Intel 3D camera mounted to the top edge of his laptop. Inset image shows how his hands are interpreted by the software. Brightly colored lines are superimposed over his palm and fingers and change angle and shape as he moves. In this way, the software can detect whether hands are open or closed or shaped in a way that might trigger a change in the sound specified by the programmer/developer/musician.

The camera unit incorporated a depth camera and a standard color camera. The depth camera used infrared light and an infrared detector, determining depth from the time-of-flight (ToF) of the light – similar to radar, but with light. Data from the depth and color cameras were processed by Intel software to determine the hand landmarks.

Dietrich used data from the Intel-developed camera software to assign sounds to hand movements. For example, the altitude of the left hand determined note or pitch. Opening or closing the right hand determined whether a note was played. The altitude of the right hand controlled loudness. In this way, Dietrich’s gesture-based controller functioned as a musical instrument.
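The mapping itself can be pictured as a small function from hand landmarks to synthesizer parameters. Below is a toy sketch of that idea; the scale, value ranges and function names are illustrative assumptions, not Dietrich's actual code.

```python
# Hypothetical gesture-to-sound mapping in the spirit described above:
# left-hand height -> pitch, right-hand openness -> note on/off,
# right-hand height -> loudness.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of MIDI note numbers

def map_gesture(left_y: float, right_y: float, right_open: bool):
    """Convert normalized hand heights (0.0 = bottom, 1.0 = top) to synth parameters."""
    index = min(int(left_y * len(C_MAJOR)), len(C_MAJOR) - 1)
    note = C_MAJOR[index]   # higher left hand -> higher pitch
    gate = right_open       # open right hand -> the note sounds
    volume = right_y        # higher right hand -> louder
    return note, gate, volume

# Left hand near the top, right hand open at mid height:
print(map_gesture(left_y=0.9, right_y=0.5, right_open=True))  # (72, True, 0.5)
```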

13 Pedals, No Hands
Another controller used at the same concert was developed and played by College of Music Ph.D. student Hunter Ewen. Dubbed the “SCREAMboard” (an acronym of “soloist-controlled, real-time, electronic audio-manipulation board”), the foot-operated musical interface was used for real-time composition and improvisation.

Doctoral student Hunter Ewen presents his foot-operated controller, which allows hands-free control of 13 sound parameters. “My goal is to have the controller allow electro-acoustic musicians a hands-free way to control all the special effects and sound features they could want. Achieving this with earlier tools has been a difficult, awkward and often expensive process,” Ewen said.
Thirteen pedals controlled recording, playing, looping (after being recorded, a sound or sound sequence is played back repeatedly as a “loop”), synchronizing loop lengths (managing the way that multiple loops fit together or align with one another), panning (controlling the way sound is thrown around a room given its arrangement of speakers) and volume.
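Under the hood, a looper like this is essentially a small state machine per pedal: one press starts recording into a buffer, the next closes the loop and plays it back. The sketch below shows one looping pedal in that spirit; the buffer handling is an illustrative assumption, not the SCREAMboard's actual implementation.

```python
class LoopPedal:
    """One pedal's record/play state machine for a live looper."""

    def __init__(self):
        self.buffer = []         # recorded audio samples
        self.recording = False
        self.playing = False
        self.position = 0

    def press(self):
        """First press starts recording; the next press closes and plays the loop."""
        if not self.recording and not self.playing:
            self.recording, self.buffer = True, []
        elif self.recording:
            self.recording, self.playing, self.position = False, True, 0

    def process(self, input_sample: float) -> float:
        """Called once per audio sample: record the input, or replay the loop."""
        if self.recording:
            self.buffer.append(input_sample)
            return input_sample
        if self.playing and self.buffer:
            out = self.buffer[self.position]
            self.position = (self.position + 1) % len(self.buffer)  # wrap around
            return out
        return input_sample
```

Synchronizing loop lengths, panning and volume would each add similar per-pedal state on top of this core.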

In one piece, Ewen (shown at left and below at right) used his voice, a kazoo and several unconventional instruments to record and loop a variety of sounds that he layered into complex textures.
The colorful diagram of vertical bars projected on the screen behind the performers shows the status of the 13 different controller pedals.

In another piece, John Gunther (shown at left), associate professor of jazz studies and co-director of BLOrk, performed solo using Ewen’s interface. He played his jazz flute, recording and looping melodic sequences that harmonized with one another to become a rich background over which he improvised a solo.

From One Mouse to Many
John Drumheller, a CU College of Music instructor, director of music technology and BLOrk co-director, talked about controllers. “In the past we were limited to one slider or one controller when we used a computer and a mouse. We could control only one aspect of sound with that mouse. But now, with the introduction of the tablet, smart phones and touch screens, each finger can control a completely different sound or quality of sound.”

BLOrk co-director John Drumheller performs on a tablet during a December 2012 concert in the ATLAS Black Box theater.

It is now possible for our 10 fingers to control 10 sounds or aspects of sound (pitch, volume, rhythms, harmonies, etc.) simultaneously. Of course, for centuries the piano has allowed our 10 fingers to make 10 different sounds at the same time, but here we are talking about the ability to assign a different sound, instrument, rhythm or dynamic to each finger. (This is similar to the way Hunter Ewen's foot-pedal device assigns and controls 13 different sounds or parameters.)
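In code, per-finger control usually hinges on the touch ID that multi-touch systems attach to each tracked finger: every ID gets routed to its own control lane. A toy sketch of that idea follows; the parameter names are hypothetical, not taken from BLOrk's software.

```python
# Each tracked finger is keyed by a touch ID and drives its own parameter.
FINGER_CONTROLS = ["pitch", "volume", "filter", "tempo", "pan",
                   "reverb", "delay", "drive", "harmony", "rhythm"]

active = {}  # touch ID -> assigned control name

def on_touch_down(touch_id: int) -> None:
    """Assign the next free control lane to a newly tracked finger."""
    free = [c for c in FINGER_CONTROLS if c not in active.values()]
    if free:
        active[touch_id] = free[0]

def on_touch_move(touch_id: int, y: float) -> None:
    """Each finger's vertical position drives only its own parameter."""
    control = active.get(touch_id)
    if control is not None:
        print(f"set {control} = {y:.2f}")  # stand-in for a message to the synth

def on_touch_up(touch_id: int) -> None:
    active.pop(touch_id, None)
```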

Four Controllers, One Touch Screen
In the music lab, Drumheller showed me his tablet for a moment. He set its screen up with four rectangles of different colors. As I touched the screen, I let my fingers slide around the surface, tap and wiggle. Instantly (much to my delight!), four different exuberant sounds were generated simultaneously as my fingers moved.

Soon, as Dietrich’s gesture-based technology progresses, we will be able to move hands and fingers in the air around us, unconfined by screen, tablet or keyboard. Our movements will instantly become sound. Dance and music, always complementary art forms, may become one.

~ ~ ~ ~
See additional photos of the March 2nd concert of the Boulder Laptop Orchestra (BLOrk) that took place in the downstairs ATLAS Black Box theater.

Get yourself on the ATLAS email list. ATLAS concerts, dance/theater performances, art installations and Speaker Series talks are always free and open to the public (well, with very few, rare exceptions).

Look into a list of upcoming ATLAS events.
Perhaps there are several you'd like to attend.


Contact the writer: Ira Liss is ATLAS Institute's assistant director of communications and also a pianist, singer/songwriter and performing artist. Enjoy videos of his original work.

Monday, February 4, 2013

The ATLAS Speaker Series, which is made possible by a generous donation by Idit Harel Caperton and Anat Harel, hosts distinguished visitors from academia, industry and the arts as part of the ATLAS Institute’s mission to explore information and communication technologies and their effect on society.
The series is an educational and experiential resource for students, faculty and the larger community to discuss the challenges, opportunities and innovative applications of technology. Talks usually run from 4-5 p.m. in the Cofrin Auditorium, ground floor, ATLAS 100 (enter from lobby), unless otherwise noted.
The following speakers have been scheduled for the spring 2013 semester:

Digital Media and Music as an Instrument for Social Change

Paul Miller (aka DJ Spooky) will discuss his recent work exploring digital media, music and the ways that art can open minds and help people gain new perspectives on issues like climate change. He will perform his multimedia electronic compositions (using his own DJ app and wall-to-wall graphics) accompanied by CU College of Music cellist Megan Knapp and violinist Emily Lenck. Miller is a multimedia digital artist, musician, composer, remixer, author of “The Book of Ice” and the first artist-in-residence at NYC’s Metropolitan Museum of Art. His visit is a collaboration between ATLAS Institute, the Program for Writing and Rhetoric and the President's Fund for the Humanities. 6-7:30 p.m. Wednesday, Feb. 6, downstairs Black Box theater, lowest basement level, B2


How Open Source Software is Changing Technology

Stormy Peters will look at how we can predict the future of technology by observing the use and development of open source software. As examples, she’ll show how nonprofits and open source software projects set technology directions and make bold statements about how to make the world a better place. She will also discuss her work at Mozilla and share insights about the future of information technology (IT).

Stormy Peters is director of websites & developer engagement at Mozilla. She is an advocate and supporter of open source software and its potential to change the software industry. She is also the founder and vice president of Kids on Computers, a nonprofit organization that sets up computer labs in developing countries. 4-5 p.m. Monday, Feb. 18, Cofrin Auditorium, ATLAS 100

Women and the Web: Bridging the Gender Internet Gap in Developing Countries

Renee Wittemyer will discuss the data on the large Internet gender gap in developing countries and the social and economic benefits of securing Internet access for women. Recent studies find that on average – across the developing world – nearly 25 percent fewer women than men have access to the Internet and the gap soars to nearly 45 percent in sub-Saharan African regions. Wittemyer will discuss this research and a call-to-action to double the number of women and girls online in developing countries from 600 million today to 1.2 billion in 3 years.

Wittemyer, director of social impact in Intel Corporation’s Corporate Responsibility Office, develops strategies for Intel’s girls and women’s campaign and manages relationships with strategic alliances, such as USAID, NGOs, and U.N. Women.
4-5 p.m. Monday, Feb. 25, Cofrin Auditorium, ATLAS 100

Education at the Intersection of Computer Science and Music

Ge Wang will discuss the transformative possibilities of music and computing to make art, strange new instruments and connections to people around the world. His talk will explore laptop and mobile phone orchestras, computer music languages, social music apps like Ocarina and Magic Piano – all examples of an emerging, growing space where computers, music and people interact. Wang is a Stanford University assistant professor in the Center for Computer Research in Music and Acoustics (CCRMA) and researches programming languages and interactive software systems for computer-generated music. 4-5 p.m. Monday, March 18, downstairs Black Box theater, lowest basement level, B2

Learning to Love Technology by Making Arts and Crafts

Leah Buechley will discuss her work developing accessible technologies that allow people of all ages – including those who might not otherwise be attracted to computing – to sketch, design, paint and sew while incorporating electronic technology. As examples, she will show student projects that blend computing with traditional arts, crafts, textiles, paper and wood. A developer of the LilyPad Arduino toolkit, Buechley received her master’s and Ph.D. degrees in computer science from CU. She is an associate professor at the MIT Media Lab, where she directs the High-Low Tech group. 4:15-5:15 p.m. Wednesday, April 10, Cofrin Auditorium, ATLAS 100

Mystery, Music and Digital Innovation

Nicolas Jaar, an international multimedia performing artist, will talk about his digital music work and creative process. Record label colleagues plus emerging Colorado-based music producers will join him in a Q&A discussion. Jaar is founder of the record label Clown & Sunset and released his debut album “Space Is Only Noise” in 2011. He has performed around the world, including an acclaimed five-hour improvisation at NYC’s Museum of Modern Art. The talk is a collaboration between ATLAS Institute and the Communikey Festival of Electronic Arts, http://communikey.us. 6-7:30 p.m. Friday, April 26, Cofrin Auditorium, ATLAS 100