Longtime Canadian Musician contributor Kevin Young had plenty of leftover material from his extensive interviews for "The Evolution of the Art," his feature story on the past, present, and future of music and production technologies and workflows for our March/April 2019 40th Anniversary issue.
Here's more from his conversation with renowned science fiction author John Scalzi.
When we spoke to Scalzi, he downplayed his own musical talents. “That said, I enjoy Canadian musicians, so if you want someone to expound for 30 minutes about how awesome Daniel Lanois is, I’m your man.”
CM: Still, you released your own record – Music For Headphones – that counts. What’s your musical background?
JS: I started playing drums in high school. That was my primary instrument. Around the turn of the century, I started playing with stringed instruments and composing software. I mostly just fiddle around with it rather than try to use it for anything serious. I do enjoy it. It’s a serious hobby, and I can tell it’s a serious hobby because I’ve devoted a ridiculous amount of money to it. I just wish my talents were commensurate with the money I’ve put into it.
CM: Before getting into the technology you create in your novels, I’d like to know, do you have any thoughts on where you see the creation of music going?
JS: The thing that’s really interesting to me is the fact that, particularly in the last 15-20 years, it’s become so much more democratic in the sense of being able to produce music of a certain level of quality that you were not able to get to before, but also the distribution of that. And we have to be careful when we talk about the distribution because… streaming audio has downsides in terms of how people actually make money, but it does mean that someone like me, who has somewhat constrained musical talent, is nevertheless able to record something that sounds reasonably good and then send it out so an audience can find it. I think what’s going to be interesting is how that continues to evolve. And also how music is being listened to these days. That’s changed dramatically even in the last 10 years.
CM: Absolutely. Anyone can make a record now. Before there was a barrier to entry – that doesn’t necessarily make the product better or worse, but it certainly makes for more of it. And, I think, that makes it harder to focus.
JS: Well, for the consumer it’s harder to focus because it makes it much more difficult to find things… The curation aspect of discovering new music has changed dramatically… I do think that, for people who are listening to music, it’s more difficult than it used to be to just find a radio station they like… I think you’re going to rely more and more on a computer to tell you what other songs you would like, from whatever band the computer thinks is similar, for whatever reason. And that’s going to progress. We are in a very rudimentary stage of that now. In the next 10 years I think it’s going to get refined just like so many other things involving machine learning and artificial intelligence have been. The most obvious example of that for me is photography. I have a Google Pixel phone and it takes better photos than the DSLR camera I had 10 years ago.
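At its simplest, the kind of recommendation Scalzi describes – a computer suggesting songs from whatever band it thinks is similar – amounts to ranking a library by feature similarity. Here is a toy sketch; the song names and feature values are invented for illustration, and real recommenders use far richer signals:

```python
import math

# Invented feature vectors for songs: (tempo, energy, acousticness), normalized 0-1.
library = {
    "song_a": (0.8, 0.9, 0.1),
    "song_b": (0.7, 0.8, 0.2),
    "song_c": (0.3, 0.2, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(liked, library, n=1):
    """Rank the rest of the library by similarity to a song the listener liked."""
    ranked = sorted(
        (name for name in library if name != liked),
        key=lambda name: cosine(library[liked], library[name]),
        reverse=True,
    )
    return ranked[:n]

print(recommend("song_a", library))  # → ['song_b'], the nearest neighbour
```

The "rudimentary stage" Scalzi mentions is essentially this nearest-neighbour idea; the refinement he predicts comes from learning the features themselves rather than hand-picking them.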
CM: In speaking to a number of musicians, one in particular said to me, ‘I just want some way to get what’s in my head, like a specific drum sound, and narrow it down so the computer spits options out at me.’
JS: We’re at the very early stages of that, and, again, that’s only going to continue to happen more often, but what your friend is going to find out is that it’s not necessarily going to give her the things that she’s heard in her head. It’s going to give her things she didn’t know that she wanted until she heard them. That’s the thing that’s going to be interesting. The thing about drum sounds, guitar sounds, or anything with synthesizers is that you can already do that. So that’s not what I think people are going to end up using this sort of technology for, because that’s already kind of baked in. What they are going to do is the thing where the computer spits out eight bars of something and you’re like, okay, I didn’t know that’s what I wanted, but then you can go in and tweak it so it’s more like what you want, then it branches you off, or you put that back into the generator and say, "Give me more like this," and you either use that as a seed, or just take off from there… If we are going to use generative music, as I think a lot of people will, it will become the Auto-Tune of the 2020s and 2030s. Then, what does it mean to compose? There are obviously people who have been doing this for years. I mean, Brian Eno made a cottage industry out of it. But as it becomes much more commonplace, when you allow technology to take such a role in composition and music making, will that increase or decrease the amount of input that humans have to make to create something listenable? That’s going to be a really interesting question.
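The generate-tweak-reseed loop Scalzi describes – spit out a phrase, keep what you like, feed it back as a seed – can be sketched with a toy first-order Markov chain over note names. The seed melody is invented for illustration and stands in for whatever material a real generative system would learn from:

```python
import random
from collections import defaultdict

def train(melody):
    """Build a first-order transition table: note -> list of observed next notes."""
    table = defaultdict(list)
    for cur, nxt in zip(melody, melody[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length, rng):
    """Walk the transition table to produce a new phrase."""
    phrase = [start]
    for _ in range(length - 1):
        choices = table.get(phrase[-1]) or [start]  # fall back if we hit a dead end
        phrase.append(rng.choice(choices))
    return phrase

rng = random.Random(0)
seed = ["C", "E", "G", "E", "C", "D", "E", "C"]  # invented seed melody
table = train(seed)
phrase = generate(table, "C", 8, rng)  # the computer "spits out eight bars"

# The listener likes the phrase: fold it back in as part of a new seed and regenerate,
# which is the "give me more like this" step in the loop.
table2 = train(seed + phrase)
more = generate(table2, phrase[0], 8, rng)
print(phrase, more)
```

A Markov chain is obviously far cruder than the systems Scalzi is imagining, but the loop structure – generate, curate, reseed – is the part he predicts will become commonplace.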
CM: The issue is that people are afraid they’ll be replaced. I mean, when the drum machine came out people freaked out, but… If a mixture of generative composition tools and AI can replace musicians… If I could say, "Give me Side 1 of The Wall and Side 2 of Zeppelin 1 and let’s see what I get," are people going to use that to make great art and great music because they’ll become so adept at manipulating that technology?
JS: The first thing that we have to acknowledge – and this happens in every medium where there is a new technology and people are worried about it – is, "Is this going to replace this or that or anything else?" Almost nothing ever gets totally replaced, right? What happens is the new technology carves out a space that is partly part of the old space and partly a new space, and then everything configures around that. One good example of this is when television showed up. Everybody was like, "Is this the end of film?" Then things settled down and television was good for some things and movies were good for others. Life moves on. In my field, publishing, when e-books arrived, people were saying, "Is this the end of the paperback?" And, as it turns out, no, it isn’t. The technology creates a new space. So whatever technology you have is going to create new art that favours and manipulates and uses whatever new technology you have. Now, the question is, will this new technology create great art? And the answer is, probably it will. It will also create a ton of mediocre art and an immense shitload of bad art, because that’s the way things always work. So, for example, if the technology creates generative stems that people then use to sequence into music, is that going to create great music? It will. Because ultimately there’s still a human behind the wheel who’s going to be curating it, and who has the drive to make that music mean something. You can create generative music until the cows come home and it’s mostly going to be aural wallpaper. Now, there’s nothing wrong with aural wallpaper...
CM: No. People create it now and it can be perfectly soothing.
JS: Exactly. But if you’re going to take that information and say, "All right, how am I going to make this into something that’s going to move someone emotionally; that’s going to make them feel happy, or excited, or sad, or whatever?" You can plug all that sort of stuff into machine learning and generate all that sort of stuff, but there still has to be someone there to sequence it. Machines are very good at dealing with the input that humans put in, but the human being is always the seat of the music, right? They are the initial input from which the machine then generates whatever it generates. And that’s probably going to remain the same for a very long time. With any new generation, people go, ‘I don’t know if that’s art. I don’t know if that’s good.’ There’s that saying, I think it was Douglas Adams… He said, basically: all the technology that exists before you’re 25 is stuff that’s always been in the world, so you don’t even think about it. From 25 to 35 it’s, "This is amazing new technology. What can I do with it?" And then any technology created after you’re 35 is, "I don’t like this. It’s messing with my worldview." You will always have the people who are going to naysay about rap or rock ‘n’ roll or jazz or swing or big band; that it’s the devil’s music, or it’s not real music, or whatever. The thing is, to the people who are creating it, who are engaged and who like it, who listen to it and incorporate it into their lives, ultimately they don’t care whether you like it or not, because it’s their thing. So there will always be that creative tension… Here is this new thing. Does it have staying power? Are we going to be listening to stuff that began as curated, generative music 40, 50, 60 years from now? Probably we will, and when that happens you won’t even think about whether or not it’s music. It will just be part of the landscape.
Re-reading this, I’m reminded that when Scalzi said that, I thought, "And what do I care? 40, 50, certainly 60 years from now, I’ll be part of the landscape, literally, as worm chow or a few specks of ash in a glorified Tupperware container." Now, if I lived in the future that Scalzi’s Old Man’s War is set in, I would care, because I would have an option.
For those unfamiliar with Scalzi’s work, I’m going to sum up the OMW premise. This effort will not do justice to the books. For me, the series (and Scalzi’s 2012 novel, Redshirts) ranks very highly on the list of must-read sci-fi, given that Scalzi’s books tend to be as entertaining as they are thought-provoking.
CM: I wanted to get into some of the technology from Old Man’s War. I was struck by how much some of the advances remind me a bit of the music industry. The idea of genetically engineered soldiers with enhanced musculature, green skin, yellow cat-like eyes – Bowie? Kiss? But it’s the idea of the interconnection between the members of the Colonial Defense Forces (CDF) that I find interesting. Through the lens of something Bob Ezrin said in an interview for this piece – we were talking about virtual reality music creation, a virtual studio – the kind of technology you’re talking about in OMW, the BrainPal specifically, would allow you to create that kind of environment for a group of musicians. How would you see that playing out?
JS: Well, the analogue world works in a particular way. And to try to re-create that in the digital world, or the world of computers or technology, it’s always going to be a simulation. It’s not going to be the same. And this is the path that technology takes. Often the first use of the technology, whatever it is, is to try to replicate what already exists, right? The first time you’d hear a synthesizer it would mostly be trying to replicate strings cheaply or make piano sounds, that sort of stuff. Then the next step was people going, "Why are we limiting ourselves to an acoustic metaphor when we have all these other things we can do?" So the second generation of music featuring synthesizers was less about known sounds than about creating unknown sounds. So, if you had a technology like we had in Old Man’s War, where you could create soundscapes, basically in your brain, and transfer those soundscapes to other people with no loss of fidelity, you’d end up creating an entirely new type of music that would be consistent with that metaphor. I mean, how would music sound when it does not, in fact, rely on acoustics at all? When all it relies on is how your brain processes this particular signal? One way or another it’s always about how the brain processes the signal; you either hear it or you don’t. So in this case the acoustics of the room, the quality of the actual instrument, the age of the wood, the glue that holds it together – if we’re talking about a Stradivarius – all those little elements are no longer relevant... How would music be different if you didn’t have to deal with your ears? If your brain still processed it as music, but the signal came in an entirely different way? That’s the interesting speculation – where we’re fundamentally trying to describe a thing that doesn’t currently exist but that the technology would create. That’s a really exciting thing. I don’t address it too much in the books because in the books, you know, I’m trying to kill aliens, but that is a real issue.
The second book (The Ghost Brigades), for example, follows the special forces, and they do this thing where their communication is not like other human communication. They speak at a rate that’s so much faster than unmodified humans that the special forces are like, ‘All humans are slow, and so you have to deal with the fact that you have to talk to them very slowly and clearly.’ But that’s because they were raised as digital natives in that particular metaphor. And so that’s a case where their acoustics, or their appreciation of speech, is a completely different thing. Imagine what their songs would be like. If you can communicate with somebody ten times faster, how would you create music that can transfer ten times as fast, but still have the same emotional wallop and impact? For anybody listening who isn’t modified it would sound like a blip and, yet, for the special forces soldiers, it could be this full, rich, complex piece. And that’s part of the speculation of science fiction. We can imagine something and how cool it would be. That’s what drives a lot of technology. People read science fiction and there’s something that’s cool, and they go, "I wish I could have that." Then the engineers and the inventors and the musicians start working towards that goal.
CM: The idea of nanotechnology – the combat armour, the human mind/weaponry interface – the idea of just telling technology what to do and it does it. Applied to music, how would you see that playing out for performers?
JS: Well, again, performers will use whatever technology you give them. So, as far as that goes, any technology that would be able to enhance the music in some way or another, artists are obviously going to use and exploit, and, again, you would have that same issue, where people go, (monster truck voice) "Well, this is the rock ‘n’ roll that I know. Back in my day all you needed was an amp." So they would get cranky about it. But to some extent you’re probably going to see that real soon anyway.
CM: It’s already happened to an extent.
JS: Yeah. The whole thing about Marshmello doing a concert in Fortnite, but here’s another thing… At the point that augmented reality glasses become commonplace, which will probably happen in the next 20 years, mostly they’re going to be used so that you can follow arrows while you’re walking and look at messages without having to look at a screen, or something like that. But ultimately somebody’s going to say, "Come to such and such a place at such and such a time, bring the following compatible AR glasses," and they will create a light and sound show that’s going to be specific to that. It will augment what’s going on onstage, so even if you don’t have the glasses there’ll still be something to look at, but that augmentation will make it a more complete experience and, in many ways, that will be the native experience. It won’t be an augmented experience. It will be, "This is what is intended for you to experience" – if you take off your glasses you’ll still get some songs, and that’s great, but putting on the glasses is the way it’s meant to be… I think that there are going to be practical aspects of it as well. Quite frankly, the person who runs the space the musicians are playing in would probably rather not have to put all the money into lights and lasers and fog banks and everything else. If that can all be offloaded onto a pair of glasses, that’s great. It’s also going to be better for musicians because, in many ways, they can give a great audiovisual experience even if ultimately it’s just them up there. That sort of augmentation, in many ways, is going to make for a more fulfilling experience for a whole generation of people who will learn to expect that, because that’s the way it’s always been for them.
CM: Putting the technology together – the BrainPal, the level of communication between members of the special forces, nanotechnology – theoretically, why couldn’t the musician download their experience so an audience member can be virtually onstage, feeling what the artist is feeling, inhabiting their perspective, not just visually but physically?
JS: I think visually is the first place that’s going to happen. You don’t need nanotechnology to do that. You just need advanced sensor technology. Now, I always warn people when they talk about the BrainPal and how cool it is that they probably don’t really want one, because that’s basically, "I’m going to put wires in your head." And considering how quickly we upgrade our phones, the idea that you’re going to have a network in your head that’s going to be there for the rest of your life – one that you’re probably not going to want to replace every two years – that’s going to turn your brain into pudding.
CM: When it comes to OMW technology, and the fact that that kind of thing could very well come down the pipe in the future, does it scare the hell out of you or is it something you’d welcome?
JS: I don’t want to be the person that says technology is morally neutral, because I think there are cases where technology is not meant to be morally neutral; that it is meant to be positive or negative. That said, it’s ten percent "what is the thing meant to be?" and ninety percent "how do people decide they’re going to use it?" For example, you could create something designed to create generative music, which sounds perfectly innocuous, right? But we also know that one of the ways that people torture other people is to play very, very loud music, without ceasing, for hours and hours at a time. If you could create a generative music pattern that is specifically tuned to be unbearable to listen to and play it for three hours straight, that is a human deciding that the technology is going to be used in this terrible way. And the technology, in and of itself, is relatively blameless for it; that’s not what the technology was designed to be used for. But that is what people decided to use it for. I believe that pretty much every technology created by humans is going to be used positively and negatively. And that goes all the way back to the first time humanity discovered fire – fire cooked our food and that was great. It also burned down the houses of all those who opposed a particular tribe, so there’s that as well. No matter what your technology is, there is always going to be somebody who is going to use it in a way you may find unpleasant, or disgusting, or morally compromised, or just wrong. And a lot of it will come down to what people decide they want to use it for. I mean, we’re already seeing acoustic technology being used for things like riot control.
I don’t want to say it’s always going to be the case, because people make choices, but I do think that when people see technology, the thing that they’ve got to look at is, how can I use this to achieve my particular ends? And not everybody has perfect ends. I mean, everybody thought the Internet was going to be wonderful; this worldwide communications network, we’ll be able to talk to each other and have wonderful conversations and world peace and all this sort of stuff. And then 2016 happened in the United States and in the U.K., and that was at least partly contingent on, you know, bad actors leveraging the susceptibility of people to look at information that frightens or confuses them, or reinforces their worldview... So, again, the only thing that we can say about technology with any sort of long-term success is that it will always, inevitably, be used in ways we never anticipate. Sometimes that will be positive. Sometimes that will be negative, but that will always happen. With respect to the future of music, it can happen the same way. You can imagine a future where new genres and new experiences of music happen that are positive and bring people together and create new, genius works of art. We can also imagine ways where that’s used to hurt, divide, and conquer people. It really comes down to, what do people want to use it for?