After many unsuccessful attempts, I FINALLY posted my Flash animation on my website! Here is a run-down of how it all happened:
1. I download Adobe Photoshop CS4 without realizing that you can't make flash animations in Photoshop (duh). I return to the Adobe website to download the correct program, Flash CS4.
2. I import a bunch of Christmas-y images to create A Very Special Christmas Flash, and play around with the different ways of turning the pictures into a Flash animation. (This was surprisingly the easiest part of the process.)
3. I save my work as a ".fla" file. I upload this to my public_html directory and find that this is NOT the way to post a Flash animation on the internet.
4. I publish my work and upload the "flash.html" file to my public_html directory. Again, it doesn't work.
5. I discover you need both the .html and the .swf files in the same directory for the animation to run. Accordingly, I upload both to public_html, and still it doesn't work.
6. I find out that renaming the files after you have published will prevent the animation from playing. So, I upload the .html and .swf files to public_html with exactly the names they had when I published, and, voila, my animation appears.
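In hindsight, the lesson of steps 3 through 6 boils down to one rule: the published .html page refers to the .swf by the exact filename Flash wrote into it, so both files must sit in the same directory under their published names. Here is a sketch of a checker for that rule (the filenames are hypothetical, and the regex is a simplification of what the publish template actually emits):

```python
import os
import re
import tempfile

def swf_references(html_path):
    """Return the .swf filenames mentioned in a published Flash page."""
    with open(html_path) as f:
        html = f.read()
    # The publish template names the movie in <param>/<embed> tags;
    # this simplified regex just grabs anything ending in .swf.
    return set(re.findall(r'[\w.-]+\.swf', html))

def check_publish_dir(html_path):
    """True if every referenced .swf sits next to the .html file."""
    folder = os.path.dirname(html_path)
    return all(os.path.exists(os.path.join(folder, swf))
               for swf in swf_references(html_path))

# Demo in a temp directory (filenames are hypothetical):
d = tempfile.mkdtemp()
html = os.path.join(d, "flash.html")
with open(html, "w") as f:
    f.write('<embed src="flash.swf" type="application/x-shockwave-flash">')

print(check_publish_dir(html))        # False: flash.swf is missing
open(os.path.join(d, "flash.swf"), "w").close()
print(check_publish_dir(html))        # True once the .swf is in place
```

Had I run something like this against my public_html directory, step 6 would have arrived a lot sooner.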
I was often frustrated through this process because I didn't know what I was doing wrong, and even after searching the internet I couldn't find the answer. It seemed the only way to figure out the problem was with the help of the professor. (Thankfully the professor is almost always available and willing to help!)
So what does this all mean to the music educator? I think there will be many times when students keep trying and trying and don't quite "get it," whether the "it" be playing an instrument or singing or composing or improvising. Also, students usually don't have the resources at home to solve musical problems.
To alleviate this problem, I wonder how music educators can provide their students with resources for solving problems. The resources could span many kinds of media. I could make practice CDs with examples of right and wrong ways of playing, and examples of interpretation. I could also post those audio files on the internet. Additionally, I would consider being available at a specified time by AIM or Gchat to answer practice questions in real time. (I know it might be really hard to get students to use that; they may not want to practice at that specific time. But I could take a survey to find out the most common practice times and fit my online chat into that window. I might even vary the day from week to week to give different students access each week.)
In short, I can infer that music education doesn't stop when class is over. We put so much responsibility on our students to practice outside of class, but we don't provide them with the tools for successful practice sessions. I hope in the future I will be able to provide examples of successful practicing during class, then make available the resources I mentioned above. Hopefully this will produce students who can use their own know-how to solve musical problems, and know when and how to access available resources when their know-how falls short.
Monday, December 15, 2008
Tuesday, December 2, 2008
New Music Ensemble Concert
Last Monday I performed in the New Music Ensemble concert. It was a challenging class for me. For a long time I struggled to accept "new music" as Music. Although I was excited to explore the vast range of extended techniques on the flute, for a while I felt I was doing little more than making a bunch of cool sounds. I had to find ways to make music out of this new sound vocabulary. I interacted with the other musicians as in a dialogue, and I tried to give my phrases and long-term improvisations a beginning, middle, and end. It was very difficult to see this as Music, though, since it didn't resemble any Music I'm used to listening to. I think if I spend more time listening to and performing new music I will understand the musicianship involved. Just like any type of Music, one appreciates it most when one is a part of that music-making for an extended period of time. Over time one learns the nuances of that Music that make it Music. In the meantime, though, I am glad for the experience and how it stretched my musical horizons.
There are a few aspects of the show I want to discuss. First of all, it was very theatrical. Prior to this concert, the only time I've seen lighting and video used at a musical performance is for a musical theater production. In the NME concert a few pieces used video, and most of them included lighting effects. A few of the videos were fixed media, like the audio tapes, and one was designed to react to the sounds we made on stage. However, we didn't interact with the video like we did with the tapes. Of course they were projected behind us, so we couldn't see them. But I wonder if we would have had a more cohesive performance if there had been musical responses to the fixed videos.
Another thing that interested me was the correlation I noticed between the improvised dancing and the improvised playing. The dancers seemed to use different parts of their bodies in as many ways as they could think of, and emphasized different movements at different times. Really, this is exactly what we were doing with our instruments. We used our knowledge of the sound repertoire to create an improvisation, and used as much of the instruments' capabilities as possible. We were instructed to respond to what we heard in the tapes, down to the smallest pitch or squeak. I could tell the dancers were responding to the general mood of the music, but I don't know if they were instructed to physically respond to every individual sound from the tapes. (Additionally, dancer-musician interactions could have made a more cohesive performance, but we didn't have enough rehearsal time together to work on dancer-musician improvisation.)
Because the dance moves were as exploratory as the musical elements, I think new music is a very difficult genre to bring to high school musicians. The dance could seem very silly if one doesn't expect those movements. And although I see the importance of introducing young students to all genres of music, especially through live performance, I think a large group of high schoolers would not be able to handle that performance. I think the mixed media would appeal to them, but I'd worry that they would not be able to take the whole performance seriously. I would love to find a way to introduce students to new music; it may be a genre they really like, and maybe they would someday contribute to the field! But I would definitely not make a field trip of the NME concert without some serious preparation beforehand.
Monday, November 24, 2008
Music Notation Software in the Classroom: True Music-Making?
When I was growing up, computers had black screens and green text, and were only accessible to the class when we earned a field trip to the computer lab to play Oregon Trail. It never crossed my mind (or my music teachers' minds, certainly!) that computers could be used in a music classroom!
This comes to mind because of a recent class discussion about teaching composition. I'll get to the specifics of the discussion in a moment.
When someone says "music technology in the classroom," one's default mental image is probably either recording technology, or notation software that every student can use. Finale even makes a barebones version, Notepad, that I encountered at my job two years ago. Apparently the previous teacher showed the students how to use Notepad, gave them some parameters (maybe), and the students composed. But were they really making music?
The principal was thrilled. She had a whole school full of little Mozarts working on Finale Notepad. She was constantly mentioning to me that I should get the laptops in my classroom to do some composition. However, I thumbed through some of the music portfolios from the previous year, and was not really surprised to see the low level of musicianship these compositions displayed.
I shouldn't judge too harshly. After all, these students were only in their first year or two of music class. But it seemed like they were all boxed into the same assignment, with no leeway to incorporate musical elements that were important to them. They all made similar use of the E major scale, or whatever it happened to be. It looked like there wasn't any joy in it.
So I wonder: Is there a better way to teach composition? For students who have never encountered music before, perhaps notation is not the place to start. They may feel very restricted, and may have little interest in writing the usual tonal, Western-approved-type of music. Instead, start with something that will appeal to them, and spark their interest and creativity. I'm not sure what that is yet, but I wouldn't want to intimidate them by forcing unfamiliar notation on them, especially in the form of an often-confusing computer program.
After they know how to read music well, and are able to hear what they want to write before they put something down on paper, then students should be introduced to notation software. Until then, take the class set of Finale Notepad downloads out of the budget, and instead get some more instruments repaired.
Monday, November 10, 2008
Jazz at Lincoln Center
I was the lucky recipient of a friend's extra ticket to see the fall gala performance at the Rose Theater tonight. Wynton Marsalis performed with his quintet, and George Benson with his entourage of musicians. Ken Burns was also there; he was accepting the Ed Bradley award for leadership.
The reason I'm writing about this in my tech blog is to discuss the very basic argument of real instruments vs. synthesized instruments. Mr. Benson had a rhythm section (including the man who conducts Barbra Streisand's orchestra!), a live string orchestra, a chorus of backup singers, and an overworked synthesizer player. I wondered why he hired every instrument family except for winds and brass; surely a few more players could have been written into the budget? It would have added immensely to my enjoyment.
Mr. Benson was performing a tribute to Nat "King" Cole. His voice is already quite close to Mr. Cole's, and the strings' presence brought out the schmaltzy quality of Mr. Cole's signature sound. But as soon as I heard flutes, and didn't see any, I raised an eyebrow: seriously? He's got a synth player for that? It was a very good synthesizer, don't get me wrong; but no professional flute player would choose to keep such a steady, uncreative vibrato on those long notes. The tone was adequate but the expression wasn't there. It detracted from the performance because it seemed like he took an unneeded shortcut.
It was even more apparent when the synth player was doing an entire choir of brass hits. That was obviously inadequate. The sound wasn't present enough, and it was a little too tinny to be believable. I was almost distracted by its wimpiness.
Of course musicians can choose to use a synth player in place of a live performer for budget or space reasons. But Mr. Benson seemed to have both financial and spatial means. I don't know if Mr. Cole skimped on musicians, but my guess is he would have had as many live performers as possible. If Mr. Benson wanted to bolster his sound and his tribute, he should have gone with the real players, even if just for those few measures.
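The "steady, uncreative vibrato" has a technical root, I think: a typical synth patch produces vibrato with a low-frequency oscillator running at a fixed rate and depth, whereas a flutist varies both across a phrase. A toy sketch of the fixed kind (the 5 Hz rate and 1% depth are illustration values, not anything measured at the concert):

```python
import math

def synth_vibrato(base_hz, seconds, rate_hz=5.0, depth=0.01, sr=100):
    """Frequency trajectory of LFO vibrato: one fixed rate, one fixed depth."""
    return [base_hz * (1 + depth * math.sin(2 * math.pi * rate_hz * n / sr))
            for n in range(int(seconds * sr))]

# Two seconds of "flute" at A440: the pitch wobbles mechanically
# between 435.6 and 444.4 Hz, never more, never less.
traj = synth_vibrato(440.0, 2.0)
print(round(min(traj), 1), round(max(traj), 1))
```

A live player would, consciously or not, widen the depth toward the peak of a phrase and relax it at the end; a fixed LFO like this one cannot.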
Thursday, November 6, 2008
Film Scores
I'm reading The Joy of Music by Leonard Bernstein, and he discusses his experience with On the Waterfront in the chapter "Interlude: Upper Dubbing, CA." ("Upper Dubbing" refers to the building where they put the sound effects and soundtrack into the movie, or "dub in" the sounds.) Bernstein describes the sound editors as geniuses who can listen to many different, seemingly contradictory, commands and somehow create an appropriate mix for the movie.
Bernstein was allowed to attend the editing sessions (a rare privilege for a composer), and added another element to the gathering: the advocate for keeping all of the music intact. HIS music, in fact: he studied the film to write scene-appropriate music, and wrote complete pieces. Bernstein still included a logical beginning, middle, and end when he composed for the film, and his work is somewhat incomplete if any part, even a single bar, is removed.
Unfortunately for Bernstein, the music is often the first to go. He admits that the best film scores are those that the viewer never really notices. If you notice the music, it has covered up the most important element: the movie. So, when the technician removes the climax of a powerful crescendo to allow Marlon Brando's grunted line to come through, Bernstein can do little more than pout about it.
I found it interesting, though not surprising, that composers will fight for every bar to keep it in the film. I am not a composer, and I don't have the experience of agonizing over every note of a score. Of course Bernstein would want to save every last bar if he could; he labored over that music and put his heart and soul into it. And yes, it would be incomplete if any part were removed. If you removed any part of the exposition of a sonata, wouldn't that be incomplete as well? This all makes sense, but I had never thought about it before. I wonder if that is the case with all film scores; is there more to the soundtrack that we're not hearing, and is the composer devastated to find those parts missing when he watches the film?
Tuesday, October 28, 2008
Sound Effects
Partners in Rhyme came through on their offer of free sound effects! They were more timely about it than I am about posting, though; I received the email only a day after I sent them the link to my blog. So, thanks, Partners in Rhyme, for the hours of free sound effects!
Monday, October 27, 2008
Musical Expression in Electronic Music
I attended a new music concert tonight at NYU. The performers were Elizabeth McNutt, flute, Dr. Esther Lamneck, clarinet, and... computer?? I've improvised on flute to a fixed electronic score before, but I have never seen a live performance of instrumentalists playing composed music to the improvisations of a computer! I don't quite understand the computer programming, so I'm sure I'm oversimplifying it. But I think the composers designed their computer programs to react to live sounds. And I was most impressed with the last piece, when that composer was on stage "playing" the computer.
It was a fascinating experience. The instrumentalists tonight were of course top-notch. It would have been just as enjoyable to listen to them play together. They varied their tone in extreme ways, yet still blended beautifully. Sometimes I couldn't tell if it was a flute or clarinet tone that I was hearing.
In addition to the live players, the computer contributed so much beyond what I thought was possible. I don't know to what extent the composers controlled the computer sounds, but the computer had many roles. It filled in the gaps between the instruments' parts. It repeated the instruments' sounds, mixed them together, transposed them, varied them, or made new melodies from them. It was like a sentient participant in the piece.
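I can only guess at the programs involved, but the transformations I heard (repeating, mixing, transposing) all have simple counterparts in basic buffer manipulation. A toy sketch on lists of samples, not a reconstruction of anyone's actual patch:

```python
import math

def repeat(samples, times=2):
    """Play captured material back again, like a delay line."""
    return samples * times

def mix(a, b):
    """Sum two buffers sample by sample."""
    return [x + y for x, y in zip(a, b)]

def transpose_octave_up(samples):
    """Crude transposition: keeping every other sample plays the
    buffer at double speed, an octave higher and half as long."""
    return samples[::2]

# One second of a 440 Hz "flute" tone at an 8 kHz sample rate:
flute = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
print(len(transpose_octave_up(flute)))  # 4000 samples: half the length
```

Real interactive systems do this with far more sophistication (and without the artifacts of naive decimation), but the roles I heard map onto operations like these.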
What really moved me, though, was to see the last composer "play" the computer. My mental image of an electronic composer was not what I saw. I thought he would seem disconnected from the performance; he would merely be there to ensure the computer program does what it's supposed to. But instead, he played the computer as if he were playing another instrument. In his face was the concentration of a musician. His body movements gave away his musical intention: he anticipated entrances and showed phrase endings. He moved quickly and sharply for loud or sudden entrances, and more slowly or deliberately for subtler sounds. There weren't any traditional cadences or harmonic rhythm to clue me in to the form, so watching his body movements helped me understand where the music was going.
This may be a controversial post: is the concert I just saw really a Music concert? A few things I observed tonight pushed me closer to accepting new music as Music. Firstly, it is not all random sounds. Though there are fewer of the notes people might recognize from Western classical music, there is a vocabulary of sounds written and used in a distinct way. Secondly, the composers wrote for particular instruments following an expanding tradition of music. It is not neo-Classical, and it is not meant to imitate a more "tonal" tradition. It is meant to expand on what we know as music, which in turn creates a new genre. Lastly, the physical motions that I mentioned above betray musical intention. The musicians on stage were trained in more conventional settings and acquired these movements from that education. They then applied that musicality to this new genre of music.
I have been struggling with the label of "Music" for electronic music, and seeing this concert gets me closer to accepting the label. My interaction with the live performers proved most important in determining a definition of Music.
Thursday, October 23, 2008
Sound Effects and Concrete Music
This week I'm working on a concrete music assignment. Concrete music (musique concrète) is the use of everyday sounds from the world around you to create a collage, or a "sound sculpture," as Dr. Gilbert puts it. One can compose a wonderful work using the raw material of sound.
Concrete music is not appealing to everyone, and I'm sure not everyone would consider it music. There is not necessarily a time signature, though composers of concrete music probably have time in mind. Rhythmic figures may arise from the repetition of certain sounds, or from the precise combination of sounds. There are no pitches or notes in the traditional sense. Many sounds we hear are on a continuous spectrum of frequency. In other words, we don't live in a monotone world. But many things we hear we don't register as notes because they have not been clearly defined as definite pitches. Sirens, for example, are like one long glissando, up and down; but few people hear notes in a siren because there are no separate sounds to distinguish as different pitches.
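One way to see why a siren reads as a glissando rather than as notes: equal-tempered pitches sit at discrete frequencies (f = 440 · 2^((n − 69)/12) for MIDI note n), while the siren sweeps continuously through everything in between. A small illustration:

```python
import math

def nearest_note(freq_hz):
    """Map a frequency to the nearest equal-tempered MIDI note number."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A siren sweeping one octave, 400 Hz up to 800 Hz, sampled along the way:
sweep = [400 * (2 ** (i / 100)) for i in range(101)]
notes = sorted({nearest_note(f) for f in sweep})
print(notes)  # thirteen consecutive note numbers, 67 through 79
```

The sweep brushes past a whole run of adjacent note numbers without sustaining any one of them, which is exactly why the ear hears a slide rather than a melody.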
Despite lack of time signature and pitches, concrete music is still musically appealing to some. A concrete music composer works to expand our definition of music by presenting a new collection of sounds to appeal to us like music does: there is structure; there is a beginning, a middle, and an end; there are sounds that may be foreign to music but make sense together.
I decided I wanted to use laughing sounds to create my composition. I did a Google search and found many websites with free downloadable sound effects. One of the most comprehensive websites was PartnersInRhyme.com. They also offer to give you a $50 library of sound effects if you mention them in your blog or on your website. I'll let you know how that turns out...
Monday, October 20, 2008
The Power of Photoshop
I am usually satisfied with my photos after I've taken them. That may be the result of growing up with film cameras; you can't do much to the picture after you get it developed. So in the years that I've been using digital cameras, I have done little more than upload my pictures to my computer. I view them using whatever photo-viewing program my computer came with. Sometimes I would crop photos because I wasn't totally satisfied with the angle or the composition, but that was the extent of my image editing.
For Tech Trends, though, I have to use Photoshop. We're not doing anything really complicated yet; most of the assignments use predetermined filters. But as I scroll through the available tools, I see what enormous capabilities Photoshop has! You don't have to use the preset functions. You can tweak every bit of color and brightness, change the viewpoint, create panoramas. The possibilities are endless.
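Under the hood, the simplest of those tweaks are just arithmetic on pixel values. A toy brightness adjustment on an RGB triple (a sketch of the idea, not Photoshop's actual algorithm):

```python
def brighten(pixel, delta):
    """Shift each RGB channel by delta, clamped to the 0-255 range."""
    return tuple(max(0, min(255, channel + delta)) for channel in pixel)

print(brighten((200, 120, 40), 80))   # (255, 200, 120): the red channel clips
print(brighten((200, 120, 40), -60))  # (140, 60, 0): the blue channel bottoms out
```

Every brightness slider is, at heart, a function like this applied to millions of pixels at once.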
So this is why some people can spend hours editing pictures! I see now how useful it can be. One can edit pictures to highlight certain attributes of one event. That will create an ideal digital representation of the event. This could be for posterity, for nostalgia, for whatever you want it to be.
Implications for teaching? For one thing, a teacher could process images from a concert, for the school's historical archives, for advocacy, or for recruitment. If the images were meant for a yearbook or a scrapbook, one may want to do little editing of the pictures. Perhaps just change brightness and color here and there to make the photographs clearer. But if the images were to be used in a campaign to recruit more students, one could use the various filters and other tools to make the pictures eye-catching. One could create very attractive posters using Photoshop.
Teachers could also consider using Photoshop in a classroom. I don't think I will use it in a music classroom; I think Photoshop is better suited for a visual arts class. I would consider using Photoshop in an electronic music class, as part of an audio/visual assignment. I can see high school students creating a movie and composing a soundtrack to go with it. But my dream job is a beginning band class. I can't see ever using Photoshop in that context. It will be useful to me for recruiting and advocating for my program, but not for my students to use directly, unless they volunteer to help me edit photos outside of class.
Wednesday, October 15, 2008
Live Performance vs. Recorded Music
Recently I was reading my weekly assignment for a class and came across a reference to Glenn Gould renouncing live performance. I didn't know much about Glenn Gould, and couldn't believe a musician would actually give up live performing and perform only in a recording studio, so I researched him a bit. Apparently this is true: in 1964 Gould gave his last live performance. From then on he dedicated himself to recordings and broadcasts for radio and television.
This brings me to my dilemma with his decision: isn't a main characteristic of music the live interaction between performer and listener? My favorite part of performing is playing with other people in an ensemble, but a very close second is performing for other people. The presence of an audience is one of the main factors in performance. Of course there is an audience for recorded music: the person listening to a stereo, an mp3 player, or a radio broadcast. But that listener is not getting as authentic a performance as she would if she were present in a concert hall. She will not observe the performer's movements, a key to his expression. She will not witness firsthand the judgments and decisions made during the performance and the interactions between players on stage.
Another disadvantage to recorded music is the quality of sound. The size and shape of a concert hall affect the sounds produced, and often enhance a performance. A member of a live audience will notice and appreciate variations in sound from her particular spot in the hall. The bassoon may sound like it is right next to her, or she may hear a very bright string section. In a recording, the best technology can only approximate the sound of live performance. The recording engineers add reverb to more closely imitate the concert hall. Unfortunately the microphones do not always pick up the lasting harmonics, echoes, and vibrations one hears and feels in live performance. And the quality of the listener's speakers will greatly affect the listening experience. The recorded performance is only as good as the technology used to record and play it.
The one reason I can concur with Gould is the advantage of editing. In a recording studio, the performer has supreme control over most musical aspects (except those I mentioned above). Gould feels live performance is like a competitive sporting event. There is too much risk and stress to give a really musical performance. In a studio he is free to fix mistakes and splice takes as necessary. This is a great tool for musicians, but I don't think it leads to the conclusion that recorded music is the ideal format for performances. Recordings preserve music as history for us, like an audio archive. But music is temporal; the performer does things on stage emotionally and musically that cannot be reproduced in a studio, and the audience should experience these firsthand.
I understand Gould's point of view, but I don't agree with it. I will make recordings to preserve my own musical history, and continue to listen to others' recordings for research, for education, and for inspiration. But I believe the most authentic musical experience is live performance.
Wednesday, October 8, 2008
Initial Thoughts
Using the internet as a means of expression is a new concept for me. Until now I used it strictly for e-mail, reading news, and wasting time (e.g. Facebook). I have recently discovered, though, that this medium is valuable not only for social and personal functions.
One mind-boggling idea is that the internet is an instrument, and your computer the performance venue. As I am learning how to embed audio files in websites, I also discover that because of the variety of available monitors, modem speeds, and file sizes, my website will appear differently to different users. This can be interpreted as viewing different performances of the same material.
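For the curious, the embedding itself is only a line or two of HTML. Here is a bare-bones sketch of the kind of tag I've been learning to use; the filename is just a placeholder for whatever audio file you upload alongside the page:

```html
<!-- Minimal audio embed of the era; "mysong.mp3" is a placeholder for
     an audio file uploaded to the same directory as the page. -->
<embed src="mysong.mp3" autostart="false" loop="false"
       width="300" height="45">
```

Even with identical markup, whether and how the clip plays depends on the viewer's browser, plugins, and connection speed, which is exactly the "different performances" effect I mean.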
Now, I wouldn't go so far as to say your computer is a musician. The creator of the website controls the raw material to the best of her ability, but she can only provide instructions up to a point. The actual display of web material is mostly left up to the viewer's computer and internet connection. It's like a composer-performer relationship: composers write as much as they can into the music, but the performer brings the music to the audience. A human performance involves much more thinking and acting, though; a computer will not respond to the music in relation to years and years of performance experience or in relation to the greater practice of music. But it is an interesting new perspective for me to think about "performances" of web material.
So for those who think "Technological Trends in Music Education" is going to be all about learning MIDI and how to use Finale, consider this your warning: you're actually joining an ensemble.