To ease into the murky waters of controlling animation with audio and expressions, we'll look at something simple. All we want to accomplish here is to stretch a layer in response to the amplitude of an audio signal.
Since expressions can't access audio data directly, we need an intermediate step: select the audio layer and apply the keyframe assistant Convert Audio to Keyframes.
This will create a new layer named "Audio Amplitude". If the audio of the original layer is stereo, the new layer will have three expression control sliders applied: "Left Channel", "Right Channel", and "Both Channels". A keyframe representing the audio amplitude will be created at each frame of each slider. The following illustration shows the relationship between the original audio waveform and the resulting audio amplitude "envelope" created by the keyframe assistant.
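To get a feel for what the keyframe assistant is doing, here's a rough sketch (in plain JavaScript, not an After Effects expression) of one way to reduce raw audio samples to a per-frame amplitude envelope. This is only an illustration of the idea; After Effects uses its own internal measure, and the function name, parameters, and peak-per-window approach here are assumptions for demonstration.

```javascript
// Illustrative only: reduce mono samples (values in [-1, 1]) to one
// amplitude value per frame by taking the peak absolute level within
// each frame's window of samples.
function amplitudeEnvelope(samples, sampleRate, frameRate) {
  const samplesPerFrame = Math.floor(sampleRate / frameRate);
  const envelope = [];
  for (let i = 0; i < samples.length; i += samplesPerFrame) {
    let peak = 0;
    for (let j = i; j < Math.min(i + samplesPerFrame, samples.length); j++) {
      peak = Math.max(peak, Math.abs(samples[j]));
    }
    envelope.push(peak);
  }
  return envelope;
}
```

Each value in the returned array plays the role of one keyframe on an "Audio Amplitude" slider.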
Note that the audio amplitude values will range from zero (silence) to some peak value which depends greatly on the nature of the audio source. The range of values generated will almost certainly not be suitable for what we have in mind, so we'll have to use one of the interpolation methods to translate the values to a more useful range.
Let's say that we want our layer's scale value to stay at its pre-expression value when the audio amplitude is zero, and increase by a factor of two when the audio is at its peak. Looking at the graph of the "Both Channels" slider, we can see that the peak amplitude is about 15 and the minimum amplitude is near zero. Since we want our scale value to double at the audio peaks, we can use the linear() interpolation method to map our audio amplitude range into our desired scale multiplier range.
Here's an expression that will double a layer's scale value as the audio amplitude varies from zero to 15:
minAudio = 0;     // amplitude that maps to no stretch
maxAudio = 15;    // peak amplitude observed in the "Both Channels" graph
maxStretch = 2.0; // scale multiplier at peak amplitude
audioLev = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");
stretch = linear(audioLev, minAudio, maxAudio, 1.0, maxStretch);
value * stretch
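If you want to experiment with the mapping outside of After Effects, the behavior of linear() is easy to reproduce: it remaps a value from an input range to an output range, clamping at the endpoints. The sketch below is a plain-JavaScript stand-in (the hard-coded audioLev value simply simulates a slider reading; it's not part of the real expression).

```javascript
// Stand-in for After Effects' linear(t, tMin, tMax, value1, value2):
// maps t from [tMin, tMax] into [value1, value2], clamping outside the range.
function linear(t, tMin, tMax, value1, value2) {
  if (t <= tMin) return value1;
  if (t >= tMax) return value2;
  return value1 + (t - tMin) * (value2 - value1) / (tMax - tMin);
}

// The stretch calculation from the expression above, with a sample amplitude:
const minAudio = 0, maxAudio = 15, maxStretch = 2.0;
const audioLev = 7.5; // stand-in for the "Both Channels" slider value
const stretch = linear(audioLev, minAudio, maxAudio, 1.0, maxStretch);
// an amplitude halfway through the range yields a stretch of 1.5
```

Note the clamping: amplitudes below minAudio leave the scale untouched (multiplier 1.0), and amplitudes above maxAudio never push the multiplier past maxStretch.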