Mar 13

Fabricated: Scientists develop method to synthesize the sound of clothing for animations →

Developments in CGI and animatronics might be getting alarmingly realistic, but the audio that goes with them often still relies on manual recordings. A pair of associate professors and a graduate student from Cornell University, however, have developed a method for synthesizing the sound of moving fabrics — such as rustling clothes — for use in animations, and thus, potentially, film. The process, presented at SIGGRAPH but reported to the public only today, involves modeling two components of the natural sound of fabric: cloth moving against cloth, and crumpling. After creating a model for the energy and pattern of these two aspects, an approximation of the sound can be generated, which acts as a kind of “road map” for the final audio.

The end result is created by breaking the map down into much smaller fragments, which are then matched against a database of similar sections of real field-recorded audio. They even included binaural recordings to give a first-person perspective for headphone wearers. The process is still overseen by a human sound engineer, who selects the appropriate type of fabric and guides the way sounds are matched, meaning it’s not quite ready for prime time. Understandable, really, as this is still a proof of concept, with real-time operation and other improvements penciled in for future iterations. What does a virtual sheet being pulled over an imaginary sofa sound like? Head past the break to hear it in action, along with a presentation of the process.
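For the curious, that matching step is essentially concatenative (unit-selection) synthesis. Here’s a minimal Python sketch of the idea, assuming the “road map” and the recording library are plain sample arrays and using simple Euclidean distance; the actual Cornell system matches in a perceptual feature space and handles details like binaural rendering that aren’t shown here. All names and parameters below are illustrative.

```python
import numpy as np

def match_fragments(target_map, database, frame_len=512):
    """Concatenative synthesis: for each short fragment of the
    synthesized 'road map', pick the closest-sounding fragment
    from a library of real field recordings."""
    # Split the target road map into equal-length fragments.
    n = len(target_map) // frame_len
    frames = target_map[:n * frame_len].reshape(n, frame_len)

    output = []
    for frame in frames:
        # Nearest neighbour by Euclidean distance over the fragment.
        dists = [np.linalg.norm(frame - clip) for clip in database]
        output.append(database[int(np.argmin(dists))])
    # Concatenate the chosen real-audio fragments into the final track.
    return np.concatenate(output)
```

The takeaway is the shape of the pipeline: a cheap synthetic approximation drives the selection of real recorded fragments, and those fragments are where the final audio’s realism comes from.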

Mar 13
aqqindex:

Christopher Chiappa, Performance Anxiety

Mar 20

Biodegradable transistors created from proteins found in the human body →

(Gizmag) In a bid to develop a transistor that didn’t need to be created in a “top down” approach, as is the case with silicon-based transistors, researchers at Tel Aviv University (TAU) turned to blood, milk and mucus proteins. The result is protein-based transistors that the researchers say could form the basis of a new generation of electronic devices that are both flexible and biodegradable.

When the researchers applied various combinations of blood, milk, and mucus proteins to any base material, the molecules self-assembled to create a semi-conducting film on a nano-scale. Each of the three different kinds of proteins brought something unique to the table, said TAU Ph.D. student Elad Mentovich, and allowed the team to create a complete circuit with electronic and optical capabilities.

The blood protein’s ability to absorb oxygen permitted the doping of semi-conductors with specific chemicals to create particular properties. Milk proteins, which boast impressive strength in difficult environments, were used to form the fibers that are the building blocks of the team’s transistor. Mucosal proteins, meanwhile, with their ability to keep red, green and blue fluorescent dyes separate, were used to create the white light emission necessary for advanced optics.

By taking advantage of the natural abilities of each protein, the researchers were able to control various characteristics of the transistor, including adjusting conductivity, memory storage, and fluorescence, among other things.

The research team, which also includes Ph.D. students Netta Hendler and Bogdan Belgorodsky, supervisor Dr. Shachar Richter and Prof. Michael Gozin, believes their new transistor could play a big role in the transition from a silicon to a carbon era.

They say the protein-based transistor would be ideal for replacing silicon, which exists in wafer form and shatters when bent, leading to a new range of flexible technologies, such as displays, mobile phones, tablets, biosensors, and microchips. Additionally, the resulting products could also be biodegradable, helping to address the growing problem of electronic waste.

The researchers say they have already taken the first steps towards creating biodegradable displays, and aim to use the protein-based transistor technology to develop entire electronic devices.

Feb 01

SPELEOLOGICAL SUPERPARKS →


(BLDGBLOG) As part of an overall strategy to rebrand itself not as a city of gambling and slot machines—not another Las Vegas—but as more of a gateway to outdoor sports and adventure tourism—a kind of second Boulder or new Moab—Reno, Nevada, now houses the world’s largest climbing wall, called BaseCamp, attached to the side of an old hotel.

[Image: The wall; photo by BLDGBLOG].

BaseCamp is “a 164-foot climbing wall, 40 feet taller than the previous world’s highest in the Netherlands,” according to DPM Climbing. “The bouldering area will also be world-class with 2900 square feet of overhanging bouldering surface.” 

You can see a few pictures of those artificial boulders over at DPM.

[Image: The wall; photo by BLDGBLOG].

Fascinatingly, though, the same company that designed and manufactured this installation—a firm called Entre Prises—also makes artificial caves.

One such cave, in particular, created for and donated to the British Caving Association, is currently being used “to promote caving at shows and events around the country. It is now housed in its own convenient trailer and is available for use by Member Clubs and organizations.” 

[Image: The British Caving Association’s artificial cave, designed by Entre Prises; photo by David Cooke].

These replicant geological forms are modular, easily assembled, and come in indoor and outdoor varieties. Indoor artificial caves, we read, “are usually made from polyester resin and glass fibre as spraying concrete indoors is often not very practicable. Indoor caves provide the experience of caving without some of the discomforts of natural or outdoor caves: the air temperature can be relatively easily controlled, in most cases specialist clothing is not required [and] the passage walls are not very thick so more cave passage can be designed to go into a small area.” 

Further, maintaining the exclamation point from the original text: “The modular nature of the Speleo System makes it possible to create any cave type and can be modified in minutes by simply unbolting and rotating a section! This means you can have hundreds of possible caving challenges and configurations for the price of one.”

It would be interesting to live in a city, at least for a few weeks, ruled by an insane urban zoning board who require all new buildings—both residential and commercial—to include elaborate artificial caves. Not elevator shafts or emergency fire exits or public playgrounds: huge fake caves torquing around and coiling through the metropolis. Caves that can be joined across property lines; caves that snake underneath and around buildings; caves that arch across corporate business lobbies in fern-like sprays of connected chambers. Plug-in caves that tour the city in the back of delivery trucks, waiting to be bolted onto existing networks elsewhere. From Instant City to Instant Cave. Elevator-car caves that arrive on your floor when you need them. Caves on hovercrafts and helicopters, detached from the very earth they attempt to represent.

This brings to mind the work of Carsten Höller, implying a project someday in which the Turbine Hall in London’s Tate Modern could be transformed into the world’s largest artificial cave system, or perhaps even a future speleo-superpark in a place like Dubai, where literally acres of tunnels sprawl across the landscape, inside and outside, aboveground and below ground, in unpredictably claustrophobic rearrangeable prefab whorls.

The “outdoor” varieties, meanwhile, are actually able “to be buried within a hillside”; however, they “must be able to withstand the bearing pressure of any overlying material, eg. soil or snow. This is usually addressed by making the caving structures in sprayed concrete that has been specifically engineered to withstand the loads. Alternatively the cave passages can be constructed in polyester resin and glass fibre but then they have to be within a structural ‘box’ if soil pressure is to be applied.” 

In any case, here are some of the cave modules offered by Entre Prises, a kind of cave catalog called the Speleo System—though it’s worth noting, as well, that “To add interest within passages and chambers, cave paintings and fossils can be added. This allows for user interest to be maintained, creating an educational experience.”

[Image: The Speleo System by Entre Prises].

As it happens, Entre Prises is also in the field of ice architecture. That is, they design and build large, artificially maintained ice-climbing walls.

These “artificial ice climbing structures… support natural ice where the air temperature is below freezing point.” However, “permanent indoor structures,” given “a temperature controlled environment,” can also be created. These are described as “self generating real ice structures that utilize a liquid nitrogen refrigeration system.”

[Images: An artificial ice structure by Entre Prises for the Winter X Games].

Amongst many things, what interests me here is the idea that niche sports enthusiasts—specifically cavers and climbers—have discovered and, perhaps more importantly, financially support a unique type of architecture and the construction techniques required for assembling it that, in an everyday urban context, would appear quite eccentric, if not even avant-garde.

Replicant geological formations in the form of modular, aboveground caves and artificially frozen concrete towers only make architectural and financial sense when coupled with the needs of particular recreational activities. These recreational activities are more like spatial incubators, both inspiring and demanding new, historically unexpected architectural forms.

So we might say that, while architects are busy trying to reimagine traditional building typologies and architectural programs—such as the Library, the Opera House, the Airport, the Private House—these sorts of formally original, though sometimes aesthetically kitsch, designs that we are examining here come not from an architecture firm at all, or from a particular school or department, but from a recreational sports firm pioneering brand new spatial environments. 

As such, it would be fascinating to see Entre Prises lead a one-off design studio somewhere, making artificial caves a respectable design typology for students to admit they’re interested in, while simultaneously pushing sports designers to see their work in more architectural terms and prodding architects to see niche athletes as something of an overlooked future clientele.

Jan 27

Streetview Stereographic is Warping Google Maps →

(Motherboard) Until now, I’ve almost exclusively associated hyperbolic fish-eye visuals with the stuff of machine elves and shitty skateboarding videos. But no longer – I’ll be damned if a new hack on Google Maps isn’t already letting me ride out this balmy Friday in relative projected complacency.

Streetview Stereographic (http://notlion.github.com/streetview-stereographic/) spins Google Street View data into beautiful, bulbous stereographic images. It’s probably the easiest, trippiest time-suck this side of 2012, offering fresh inversions of drab, oppressive urban milieus. Simply enter an address or coordinate set and suddenly geometry ain’t so bad and you’re living life in the proverbial fish bowl, to boot. And what’s great is the program allows users to manipulate their desired locations: Click and hold on the left panel (pro tip: maximize to full screen) and wrap pavement patches into microcosmoses or flip street views outward into “swirling vortexes of urban fabric.” The possibilities are seemingly infinite.
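Under the hood, this is a standard stereographic (“little planet”) reprojection. Here’s a rough Python sketch of the math, assuming you already have an equirectangular panorama saved locally; the real site does the equivalent in WebGL directly on Street View imagery, and every file name and parameter here is made up for illustration.

```python
import numpy as np
from PIL import Image

def little_planet(pano_path, size=1024, zoom=2.8):
    """Stereographic ('little planet') reprojection of an
    equirectangular panorama -- the family of projection
    Streetview Stereographic applies to Street View data."""
    pano = np.asarray(Image.open(pano_path))
    h, w = pano.shape[:2]

    # Output pixel grid, centred at the origin.
    ys, xs = np.mgrid[0:size, 0:size]
    x = (xs - size / 2) / (size / zoom)
    y = (ys - size / 2) / (size / zoom)

    # Inverse stereographic projection: plane -> sphere.
    r = np.hypot(x, y)
    phi = 2 * np.arctan(r / 2)          # angle up from the nadir
    lat = phi - np.pi / 2               # nadir lands at the centre
    lon = np.arctan2(y, x)

    # Sphere -> equirectangular source coordinates.
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return Image.fromarray(pano[v, u])

# little_planet("streetview_pano.jpg").save("planet.png")
```

Dragging around in the app amounts to rotating the sphere before this projection is applied, which is why facades wrap into rings or explode outward depending on where the nadir ends up.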

If I can lodge one complaint against the spate of attention showered on this hack over the past few days, it’s that both the original fish-eye spins I’ve seen – and the program’s default projections (when I fired up Streetview Stereographic moments ago, up popped Zuccotti Park, epicenter of the Occupy movement) – have thus far been too busy, too cluttered. This can kill the stereographic effect. If anything, clean, simple locations and facades – monoliths, say – actually maximize the effect. In this case, less truly is more.

Also, the hack is finicky. With a few exceptions, it doesn’t yet seem to allow for locations that are even remotely off the beaten path. This means no Devils Tower or Ayers Rock or most any other awe-inspiring natural monoliths. Bust.

But hey, don’t get me wrong. Without question I’ll be fiddling around with this thing a whole lot more. Below are a few manmade monoliths, spires and towers that I’ve managed to dig up and contort. Some are painfully obvious. Others (maybe?) aren’t. Hit the replies if you recognize any. I’ve provided cities to help you narrow things down.

Paris

Washington, D.C.

London

Malmö

Madrid

Rome

Nov 11

Microsoft's new vision video →

So, here’s a Vision Of The Future that’s popular right now.

It’s a lot of this sort of thing.

As it happens, designing Future Interfaces For The Future used to be my line of work. I had the opportunity to design with real working prototypes, not green screens and After Effects, so there certainly are some interactions in the video which I’m a little skeptical of, given that I’ve actually tried them and the animators presumably haven’t. But that’s not my problem with the video.

My problem is the opposite, really — this vision, from an interaction perspective, is not visionary. It’s a timid increment from the status quo, and the status quo, from an interaction perspective, is actually rather terrible.

This matters, because visions matter. Visions give people a direction and inspire people to act, and a group of inspired people is the most powerful force in the world. If you’re a young person setting off to realize a vision, or an old person setting off to fund one, I really want it to be something worthwhile. Something that genuinely improves how we interact.

This little rant isn’t going to lay out any grand vision or anything. I just hope to suggest some places to look.

Before we think about how we should interact with our Tools Of The Future, let’s consider what a tool is in the first place.

I like this definition: A tool addresses human needs by amplifying human capabilities.

That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.

In this rant, I’m not going to talk about human needs. Everyone talks about that; it’s the single most popular conversation topic in history.

And I’m not going to talk about technology. That’s the easy part, in a sense, because we control it. Technology can be invented; human nature is something we’re stuck with.

I’m going to talk about that neglected third factor, human capabilities. What people can do. Because if a tool isn’t designed to be used by a person, it can’t be a very good tool, right?

Take another look at what our Future People are using to interact with their Future Technology:

Do you see what everyone is interacting with? The central component of this Interactive Future? It’s there in every photo!

That’s right! —

HANDS

And that’s great! I think hands are fantastic!

Hands do two things. They are two utterly amazing things, and you rely on them every moment of the day, and most Future Interaction Concepts completely ignore both of them.

Hands feel things, and hands manipulate things.

Go ahead and pick up a book. Open it up to some page.

Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.

Go ahead and pick up a glass of water. Take a sip.

Notice how you know how much water is left, by how the weight shifts in response to you tipping it.

Almost every object in the world offers this sort of feedback. It’s so taken for granted that we’re usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.

There’s a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called “work” for millions of years.

Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit.

What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Pictures Under Glass is an interaction paradigm of permanent numbness. It’s a Novocaine drip to the wrist. It denies our hands what they do best. And yet, it’s the star player in every Vision Of The Future.

To me, claiming that Pictures Under Glass is the future of interaction is like claiming that black-and-white is the future of photography. It’s obviously a transitional technology. And the sooner we transition, the better.

What can you do with a Picture Under Glass? You can slide it.

That’s the fundamental gesture in this technology. Sliding a finger along a flat surface.

There is almost nothing in the natural world that we manipulate in this way.

That’s pretty much all I can think of.

Okay then, how do we manipulate things? As it turns out, our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. In each of these pictures, pay attention to the positions of all the fingers, what’s applying pressure against what, and how the weight of the object is balanced:

Many of these are variations on the four fundamental grips. (And if you like this sort of thing, you should read John Napier’s wonderful book.)

Suppose I give you a jar to open. You actually will switch between two different grips:

You’ve made this switch with every jar you’ve ever opened. Not only without being taught, but probably without ever realizing you were doing it. How’s that for an intuitive interface?

We live in a three-dimensional world. Our hands are designed for moving and rotating objects in three dimensions, for picking up objects and placing them over, under, beside, and inside each other. No creature on earth has a dexterity that compares to ours.

The next time you make a sandwich, pay attention to your hands. Seriously! Notice the myriad little tricks your fingers have for manipulating the ingredients and the utensils and all the other objects involved in this enterprise. Then compare your experience to sliding around Pictures Under Glass.

Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?

So then. What is the Future Of Interaction?

The most important thing to realize about the future is that it’s a choice. People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.

Despite how it appears to the culture at large, technology doesn’t just happen. It doesn’t emerge spontaneously, like mold on cheese. Revolutionary technology comes out of long research, and research is performed and funded by inspired people.

And this is my plea — be inspired by the untapped potential of human capabilities. Don’t just extrapolate yesterday’s technology and then cram people into it.

This photo could very well be our future. But why? Why choose that? It’s a handheld device that ignores our hands.

Our hands feel things, and our hands manipulate things. Why aim for anything less than a dynamic medium that we can see, feel, and manipulate?

There is a smattering of active research in related areas. It’s been smattering along for decades. This research has always been fairly marginalized, and still is. But maybe you can help.

And yes, the fruits of this research are still crude, rudimentary, and sometimes kind of dubious. But look —

In 1968 — three years before the invention of the microprocessor — Alan Kay stumbled across Don Bitzer’s early flat-panel display. Its resolution was 16 pixels by 16 pixels — an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

And then he chased that carrot through decades of groundbreaking research, much of which is responsible for the hardware and software that you’re currently reading this with.

That’s the kind of ambitious, long-range vision I’m talking about. Pictures Under Glass is old news. Let’s start using our hands.

Sep 24

Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment →


(Gizmodo) UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.

I just can’t believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—“this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”

Indeed, it’s mindblowing. I’m simultaneously excited and terrified. This is how it works:

They used three different subjects for the experiments—incidentally, they were part of the research team, because the experiments required being inside a functional Magnetic Resonance Imaging system for hours at a time. The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded blood flow through their brains’ visual cortex.

The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain activity. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
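In other words, the “learning” phase fits an encoding model: a mapping from features of the movie (the Gallant lab used motion-energy filters) to each voxel’s response. A toy version of that fit, using random stand-in data and ordinary ridge regression, might look like the sketch below; the shapes, values, and the choice of plain ridge regression are all illustrative simplifications, not the paper’s actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data (shapes are illustrative): rows are fMRI
# time points; features would be motion-energy filters applied to
# the movie, responses are the BOLD signal per visual-cortex voxel.
stim_features = rng.standard_normal((1200, 500))     # (timepoints, features)
voxel_responses = rng.standard_normal((1200, 2000))  # (timepoints, voxels)

# One regularized linear encoding model per voxel: this is the
# "learning" phase the article describes, in its simplest form.
encoder = Ridge(alpha=1.0)
encoder.fit(stim_features, voxel_responses)

# The fitted model can then predict what every voxel *would* do
# for any new clip -- e.g. each second of the YouTube database.
new_clip_features = rng.standard_normal((1, 500))
predicted_voxels = encoder.predict(new_clip_features)
```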

An 18-million-second picture palette

After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of predicted brain activity for each clip. From all these videos, the software picked the hundred clips whose predicted activity was most similar to the activity recorded while the subject watched, combining them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.

Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what more closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.
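Put as code, the palette step might look something like this. It’s a deliberate simplification: the published method scores clips with a Bayesian posterior under the fitted encoding model, whereas here a plain correlation stands in for that score, and all array shapes are hypothetical.

```python
import numpy as np

def reconstruct(measured, predicted_db, clips, k=100):
    """Average the k database clips whose *predicted* brain activity
    best matches the *measured* activity -- the 'palette' step.

    measured:     (n_voxels,) actual fMRI pattern for one moment
    predicted_db: (n_clips, n_voxels) model predictions per clip
    clips:        (n_clips, n_frames, height, width) clip frames
    """
    # Similarity of the measured voxel pattern to each clip's
    # model-predicted pattern (higher correlation = better match).
    scores = np.array([np.corrcoef(measured, p)[0, 1] for p in predicted_db])
    top = np.argsort(scores)[-k:]

    # Weight the best matches by score and blend their frames into
    # one composite movie, like the published reconstructions.
    weights = scores[top] / scores[top].sum()
    return np.tensordot(weights, clips[top], axes=1)
```

Averaging a hundred clips is also why the reconstructions look blurry: the output is a blend of real videos that merely resemble the stimulus, not pixels read out of the brain.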

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.

In this other video you can see how this process worked in the three experimental targets. On the top left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below you can see the movie “extracted” from their brain activity. It shows that this technique gives consistent results independent of what’s being watched—or who’s watching. The three lines of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.

Right now, the resulting quality is not good, but the potential is enormous. Lead research author—and one of the lab test bunnies—Shinji Nishimoto thinks this is the first step to tap directly into what our brain sees and imagines:

Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.

The brain recorders of the future

Imagine that. Capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.

This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path that this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording the five senses of a human being and playing them back into the brain itself.

This new development brings us closer to that goal, which, I have no doubt, will happen at some point. Given the exponential increase in computing power and our understanding of human biology, I think this will arrive sooner than most mortals expect. Perhaps one day you’ll be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull. [UC Berkeley]

Aug 26

Microsoft’s Bill Buxton uploaded this video from 1970 that displays groundbreaking UI for early keyframe animation…