
Notes: Possibly the best Web Audio tutorial. Teropa: Learn Web Audio from the Ground Up

By 葉梓濤, published 2021-06-25

These are notes on a four-part introductory series on the Web Audio API, which teaches you to make sound in the browser. It mostly covers how sound decomposes and some synthesizer fundamentals, and it is very approachable for beginners. The author also has material on generative music, which I will write up later.

Related content:

Notes on Teropa: How Generative Music Works

Notes: GDC2018 Serialism & Sonification in Mini Metro

Translation: Brian Eno on Generative Music. "Evolving metaphors, in my opinion, is what artists do."


00 What Is the Web Audio API

https://teropa.info/blog/2016/08/19/what-is-the-web-audio-api.html

With the Web Audio API, you can create and process sounds in any web application, right inside the browser.

The post mainly describes, from the API's point of view, how Web Audio is built out of inputs, processing and outputs.

https://www.w3.org/TR/webaudio-usecases/

It can be used for games, music, audiovisual art, and more.

At the heart of the Web Audio API is a number of different audio inputs, processors, and outputs, which you can combine into an audio graph that creates the sound you need.

[Figure: inputs, processors and outputs combined into an audio graph; image from the author's site]

Input

Buffer Sources: When you have a pre-recorded sound sample in a file (e.g. .mp3 or .wav), you can load it into your application with XHR, decode it into an AudioBuffer, and then play it with an AudioBufferSourceNode.

Media Element Sources: Take sound from an &lt;audio&gt; or &lt;video&gt; element on the page via a MediaElementAudioSourceNode.

Media Stream Sources: Capture local microphone input with Media Streams and feed it to a MediaStreamAudioSourceNode.

Oscillators: You can use oscillators to make continuous sine, square, sawtooth, triangle and custom-shaped waves.

Processing

Gains: You can control volume with GainNodes. Just set the volume level, or make it dynamic with fade ins/outs or tremolo effects.

Filters: Filters allow you to adjust the volume of a specific audio frequency range. BiquadFilterNode and IIRFilterNode support many different kinds of frequency responses. These are the building blocks for a great variety of things: cleaning up sound, making equalizers, synth filter sweeps, and wah-wah effects.

Delays: DelayNodes allow you to hold on to an audio signal for a moment before feeding it forward. You can make echoes and other delay-based effects. Also useful for adjusting audio-video sync.

Stereo Panning: With StereoPannerNode you can move sounds around in the stereo field: to the left ear, right ear, or somewhere in between.
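For a mono input, the Web Audio spec defines StereoPannerNode's behavior as an equal-power pan law. Here is a small sketch of that gain computation in plain JavaScript (the function `stereoGains` is my own, not part of the API):

```javascript
// Equal-power pan law for a mono input, per the Web Audio spec:
// pan in [-1, 1] maps to x in [0, 1], then left/right gains are
// cos and sin of x * pi/2, so total power stays constant.
function stereoGains(pan) {
  const x = (pan + 1) / 2;
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}

console.log(stereoGains(0));  // centered: both channels at ~0.7071
console.log(stereoGains(-1)); // hard left: left = 1, right = 0
```

Note that the centered gains are 1/√2 rather than 0.5, which keeps the perceived loudness constant as the sound moves across the field.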

3D Spatialization: With PannerNode you can move sounds around not just in the stereo field, but in 3D space. Make things louder or quieter based on whether they're "near" or "far" and whether they're projecting sound in your direction or not. Very useful for games, virtual reality apps, and other apps where you need to match the positions of sound sources with visuals.

Convolution Reverb: The ConvolverNode lets you make things sound like you're in a physical space like a room, a music hall, or a cave. There's no shortage of options given the huge amount of impulse responses freely available online. In games you can emulate different physical spaces. In music you can add ambience or emulate vintage gear.

Distortion: You can distort sounds using WaveShaperNodes. Mostly useful in music, where distortion can be used in many kinds of effects, ranging from an "analog warmth" to a filthy overdrive.

Compression: You can use a DynamicsCompressorNode to reduce the dynamic range of your soundscape by making loud sounds quieter and quiet sounds louder. Useful in situations where certain sounds may pile up and overtake everything else, or conversely not stand out enough.
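Compressors are usually described by a static gain curve: above a threshold, level increases are divided by a ratio. The sketch below illustrates that idea in plain JavaScript; it is a simplification, not the exact curve DynamicsCompressorNode implements (which also has a soft knee and attack/release smoothing), and `compressDb` is a hypothetical helper:

```javascript
// Static compression curve with a hard knee:
// below the threshold the level passes through unchanged;
// above it, the excess is divided by the compression ratio.
function compressDb(inputDb, thresholdDb, ratio) {
  if (inputDb <= thresholdDb) return inputDb;
  return thresholdDb + (inputDb - thresholdDb) / ratio;
}

console.log(compressDb(-6, -24, 12));  // 18 dB over threshold -> -24 + 18/12 = -22.5
console.log(compressDb(-30, -24, 12)); // below threshold: unchanged, -30
```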

Custom Effects, AudioWorklets: When none of the built-in processors does what you need, you can make an AudioWorklet and do arbitrary real-time audio processing in JavaScript. An AudioWorklet is similar to a Web Worker in that they run in a separate context from the rest of your app. But unlike Web Workers, they all run on the audio thread with the rest of the audio processing.

This is a new API and not really available yet. Current apps need to use the deprecated ScriptProcessorNode instead.

Channel Splitting &amp; Merging: When you're working with stereo or surround sound, you have several sound channels (e.g. left and right). Usually all of the channels go through the same nodes in the graph, but you can also process them separately. Split the channels, route them to separate processing nodes, and then merge them back again.

Analysis &amp; Visualization: With AnalyserNode you can get real-time read-only access to audio streams. You can build visualizations on top of this, ranging from simple waveforms to all kinds of artistic visual effects.

Output

Speakers: The most common and most obvious destination of your Web Audio graph is the user's speakers or headphones. This is the default AudioDestinationNode of any Web Audio context.

Stream Destinations: Like its stream source counterpart, MediaStreamDestinationNode provides access to WebRTC MediaStreams. You can send your audio output to a remote WebRTC peer, broadcast to many such peers, or just record the audio to a local file.

Buffer Destinations: You can construct an OfflineAudioContext to build a whole Web Audio graph that outputs to an AudioBuffer instead of the device's speakers. You can then send that AudioBuffer somewhere or use it as a source in another Web Audio graph.

These kinds of audio contexts will try to process audio faster than real time so that you get the result buffer as quickly as possible. They can be useful for "prerendering" expensive sound effects and then playing them back multiple times. This is similar to rendering a complex visual scene on a canvas element and then drawing that canvas to another one multiple times.

Building the Audio Graph

[Figure: an example audio graph, from the author's site]

You build a graph like this with the Web Audio JavaScript API. First create an AudioContext. This is the main entry point to Web Audio: it backs your audio graph, spins up an audio processing thread in the background, and opens a system audio stream:

```javascript
let audioCtx = new AudioContext();
```

Then you can build up the rest of the graph:

```javascript
// Establish an AudioContext
let audioCtx = new AudioContext();

// Create the nodes of your audio graph
let sourceLeft = audioCtx.createBufferSource();
let sourceRight = audioCtx.createBufferSource();
let pannerLeft = audioCtx.createStereoPanner();
let pannerRight = audioCtx.createStereoPanner();

// Set parameters on the nodes
sourceLeft.buffer = myBuffer;
sourceLeft.loop = true;
sourceRight.buffer = myBuffer;
sourceRight.loop = true;
sourceRight.playbackRate.value = 1.002;
pannerLeft.pan.value = -1;
pannerRight.pan.value = 1;

// Make connections between the nodes, ending up in the destination output
sourceLeft.connect(pannerLeft);
sourceRight.connect(pannerRight);
pannerLeft.connect(audioCtx.destination);
pannerRight.connect(audioCtx.destination);

// Start playing the input nodes
sourceLeft.start(0);
sourceRight.start(0);
```

Although you construct the nodes and set their parameters in ordinary JavaScript, the audio processing itself does not happen in JavaScript. Once the graph is set up, the browser starts processing the audio on a separate audio thread, using highly optimized, platform-specific C++ and assembly code.

If you like experimental music and want to learn Web Audio by hacking on something fun, my JavaScript Systems Music tutorial might be of interest to you. (Another tutorial by the author; I plan to write notes on it later.)

MDN is my go-to source when I'm trying to figure out a particular feature of the API. It has documentation for all the nodes and parameters as they are currently implemented in browsers. When you're dealing with binary audio data buffers directly, you may also find their documentation of typed arrays very useful. (I use MDN to look up specific APIs.)

The relevant specs are a good way to dive into how a specific feature actually works:

There's the Web Audio spec itself.

Streams are specified in the Media Capture and Streams spec.

The Web MIDI spec is useful when you want to support MIDI connected devices in your apps.

The Web Audio Weekly newsletter from Chris Lowis is nice if you want to get a semi-regular email digest of what's going on with Web Audio. (There's also a great talk on YouTube he's given about exploring the history of sound synthesis with Web Audio.)

If you just want to see some cool stuff, check out Google's Chrome Music Lab and Jake Albaugh's Codepens.

For example, the Musical Chord Progression Arpeggiator.

01 Signals and Sine Waves

https://teropa.info/blog/2016/08/04/sine-waves.html

This post introduces the sine wave and how an audio buffer stores its array of samples. It first demonstrates a deliberately inefficient way to generate one in JavaScript, for the sake of understanding, and then brings in the built-in oscillator node.

```javascript
let audioContext = new AudioContext();
```

Manually filling a buffer with two seconds of a 440 Hz sine wave:

```javascript
const REAL_TIME_FREQUENCY = 440;
const ANGULAR_FREQUENCY = REAL_TIME_FREQUENCY * 2 * Math.PI;

let audioContext = new AudioContext();
let myBuffer = audioContext.createBuffer(1, 88200, 44100); // 1 channel, 2 s at 44.1 kHz
let myArray = myBuffer.getChannelData(0);
for (let sampleNumber = 0; sampleNumber < 88200; sampleNumber++) {
  myArray[sampleNumber] = generateSample(sampleNumber);
}

function generateSample(sampleNumber) {
  let sampleTime = sampleNumber / 44100;
  let sampleAngle = sampleTime * ANGULAR_FREQUENCY;
  return Math.sin(sampleAngle);
}

let src = audioContext.createBufferSource();
src.buffer = myBuffer;
src.connect(audioContext.destination);
src.start();
```

OscillatorNode is much, much more efficient. It does its work in native browser code (C++ or assembly) rather than in JavaScript.

```javascript
const REAL_TIME_FREQUENCY = 440;

let audioContext = new AudioContext();
let myOscillator = audioContext.createOscillator();
myOscillator.frequency.value = REAL_TIME_FREQUENCY;
myOscillator.connect(audioContext.destination);
myOscillator.start();
myOscillator.stop(audioContext.currentTime + 2); // Stop after two seconds
```

02 Controlling Frequency and Pitch

https://teropa.info/blog/2016/08/10/frequency-and-pitch.html

What's the Relationship Between Frequency and Pitch?

How About Musical Notes?

```javascript
// Reference note:
const A4 = 440;

// Octave jumps:
const A5 = A4 * Math.pow(2, 1);  // Same as 440 * 2
const A6 = A4 * Math.pow(2, 2);  // Same as 440 * 2 * 2
const A3 = A4 * Math.pow(2, -1); // Same as 440 / 2
const A2 = A4 * Math.pow(2, -2); // Same as 440 / 2 / 2

// Single note jumps: the exponent's numerator is the number of
// semitones away from A4 (B4 is 2 up, C5 is 3 up, G4 is 2 down, F4 is 4 down)
const B4 = A4 * Math.pow(2, 2 / 12);
const C5 = A4 * Math.pow(2, 3 / 12);
const G4 = A4 * Math.pow(2, -2 / 12);
const F4 = A4 * Math.pow(2, -4 / 12);

let audioCtx = new AudioContext();
let osc = audioCtx.createOscillator();

osc.frequency.value = 440;
osc.frequency.setValueAtTime(440 * Math.pow(2, 1 / 12), audioCtx.currentTime + 1);
osc.frequency.setValueAtTime(440 * Math.pow(2, 2 / 12), audioCtx.currentTime + 2);

osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 3);
```
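The semitone arithmetic generalizes to one helper: in equal temperament, a note n semitones away from A4 has frequency 440 · 2^(n/12). Using MIDI note numbers (where A4 = 69), a small sketch (`midiToFrequency` is my own helper, not part of the Web Audio API):

```javascript
// Equal temperament: each semitone multiplies frequency by 2^(1/12).
// MIDI note 69 is A4 = 440 Hz.
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

console.log(midiToFrequency(69)); // 440 (A4)
console.log(midiToFrequency(81)); // 880 (A5, one octave up)
console.log(midiToFrequency(57)); // 220 (A3, one octave down)
```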

How About Sliding from One Pitch to Another?

```javascript
let audioCtx = new AudioContext();
let osc = audioCtx.createOscillator();

osc.frequency.setValueAtTime(440, audioCtx.currentTime);
osc.frequency.linearRampToValueAtTime(
  440 * Math.pow(2, 1 / 12),
  audioCtx.currentTime + 1
);

osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 3);
```

You can use different ramp functions to get a different feel of glide.
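The two ramp shapes are defined precisely in the Web Audio spec: a linear ramp interpolates the value linearly over time, while an exponential ramp multiplies by a constant factor per unit time, which sounds like a constant rate of pitch change. Sketching both interpolation formulas as plain functions (`linearRamp` and `exponentialRamp` are my own names):

```javascript
// Value at time t while ramping from v0 (at t0) to v1 (at t1).
function linearRamp(v0, v1, t0, t1, t) {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// Exponential ramp: linear in log-frequency, i.e. linear in pitch.
// Requires v0 and v1 to have the same sign and be non-zero.
function exponentialRamp(v0, v1, t0, t1, t) {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// Halfway through a one-second glide from 220 Hz to 440 Hz:
console.log(linearRamp(220, 440, 0, 1, 0.5));      // 330
console.log(exponentialRamp(220, 440, 0, 1, 0.5)); // 220 * sqrt(2), about 311.13
```

The exponential midpoint (about 311 Hz) is exactly half an octave up, which is why exponential ramps sound like even glides to the ear.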

```javascript
const G4 = 440 * Math.pow(2, -2 / 12);
const A4 = 440;
const F4 = 440 * Math.pow(2, -4 / 12);
const F3 = 440 * Math.pow(2, -16 / 12);
const C4 = 440 * Math.pow(2, -9 / 12);

let audioCtx = new AudioContext();
let osc = audioCtx.createOscillator();

let t = audioCtx.currentTime;
osc.frequency.setValueAtTime(G4, t);
osc.frequency.setValueAtTime(G4, t + 0.95);
osc.frequency.exponentialRampToValueAtTime(A4, t + 1);
osc.frequency.setValueAtTime(A4, t + 1.95);
osc.frequency.exponentialRampToValueAtTime(F4, t + 2);
osc.frequency.setValueAtTime(F4, t + 2.95);
osc.frequency.exponentialRampToValueAtTime(F3, t + 3);
osc.frequency.setValueAtTime(F3, t + 3.95);
osc.frequency.exponentialRampToValueAtTime(C4, t + 4);

osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 5); // Stop after five seconds
```

03 Controlling Amplitude and Loudness

https://teropa.info/blog/2016/08/30/amplitude-and-loudness.html

```javascript
let audioCtx = new AudioContext();

let osc = audioCtx.createOscillator();
let gain = audioCtx.createGain();

// Set parameters
osc.frequency.value = 440;
gain.gain.value = 0.5;

// Connect graph
osc.connect(gain);
gain.connect(audioCtx.destination);

// Schedule start and stop
osc.start();
osc.stop(audioCtx.currentTime + 2);
```

How Do I Set Signal Amplitude in Web Audio?


What Are the Limits of Amplitude and What Happens If I Exceed Them?


How Do I Control Amplitude Changes Over Time?

AudioParam.setTargetAtTime(): its timeConstant parameter is the time-constant value, given in seconds, of an exponential approach to the target value. The larger this value is, the slower the transition will be.
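Concretely, the spec defines the curve as v(t) = target + (start − target) · e^(−(t − t0)/timeConstant), so after one time constant the value has covered about 63.2% of the distance to the target. A plain-JavaScript sketch (`targetApproach` is a hypothetical helper, not part of the API):

```javascript
// Exponential approach used by setTargetAtTime:
// the gap to the target shrinks by a factor of e per time constant.
function targetApproach(start, target, timeConstant, elapsed) {
  return target + (start - target) * Math.exp(-elapsed / timeConstant);
}

// Fading a gain from 1 toward 0 with a 0.5 s time constant:
console.log(targetApproach(1, 0, 0.5, 0.5)); // after 1 time constant: 1/e, about 0.368
console.log(targetApproach(1, 0, 0.5, 2.5)); // after 5 time constants: about 0.007
```

A practical consequence: the curve never quite reaches the target, so fades are usually considered "done" after roughly five time constants.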

What About Decibels?

The decibel part is honestly a bit too involved; I'm not going to dig into it for now.

Decibels are always a relative measure. There's an exponential relationship between the amplitude of a sound wave and its loudness in decibels.

The Relationship Between Amplitude and Decibels

The reference point we usually use to measure sounds in the physical world is the Sound Pressure Level, or dB SPL.

In digital audio we instead use decibels relative to full scale (dBFS), which is anchored on the maximum peak level possible in the system.
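For linear amplitude values like those in a GainNode, the dBFS relationship is dB = 20 · log10(amplitude): full scale (1.0) is 0 dBFS, and halving the amplitude is roughly −6 dB. A minimal sketch of the conversion (helper names are mine):

```javascript
// dBFS <-> linear amplitude, relative to full scale (amplitude 1.0).
function amplitudeToDb(amplitude) {
  return 20 * Math.log10(amplitude);
}

function dbToAmplitude(db) {
  return Math.pow(10, db / 20);
}

console.log(amplitudeToDb(1));   // 0 dBFS (full scale)
console.log(amplitudeToDb(0.5)); // about -6.02 dB
console.log(dbToAmplitude(-20)); // 0.1
```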

05 Additive Synthesis and the Harmonic Series

https://teropa.info/blog/2016/09/20/additive-synthesis.html

What Happens When I Combine Two or More Sine Waves? [Recommended: 3Blue1Brown's video visualizing the Fourier transform, and the (Chinese) article《如果看了這篇文章你還不懂傅立葉變換,那就過來掐死我吧》.]

When we want to do additive synthesis in Web Audio, fortunately we don't need to crunch any numbers by hand. The API lets you connect two or more source nodes to the same destination.

```javascript
let audioCtx = new AudioContext();

let osc1 = audioCtx.createOscillator();
let osc2 = audioCtx.createOscillator();
let osc3 = audioCtx.createOscillator();
let masterGain = audioCtx.createGain();

osc1.frequency.value = 440;
osc2.frequency.value = 550;
osc3.frequency.value = 660;
masterGain.gain.value = 0.3;

osc1.connect(masterGain);
osc2.connect(masterGain);
osc3.connect(masterGain);
masterGain.connect(audioCtx.destination);

osc1.start();
osc2.start();
osc3.start();
```

What Is the Harmonic Series?


[Recommended: 3Blue1Brown's video on the relationship between music and measure theory.]

Of course, this is just one of the infinitely many frequency combinations we could use. How do we find interesting sounds in this infinite space of frequencies? One way is to pick a frequency, any frequency, and then generate integer multiples of it. Starting from 440 Hz, for example, we get the following series:

440Hz

2 * 440Hz = 880Hz

3 * 440Hz = 1320Hz

4 * 440Hz = 1760Hz

5 * 440Hz = 2200Hz

6 * 440Hz = 2640Hz

...
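The series above is just "integer multiples of the fundamental", which is easy to generate programmatically (`harmonicSeries` is my own helper):

```javascript
// The harmonic series over a fundamental: integer multiples of its frequency.
function harmonicSeries(fundamental, count) {
  const harmonics = [];
  for (let n = 1; n <= count; n++) {
    harmonics.push(n * fundamental);
  }
  return harmonics;
}

console.log(harmonicSeries(440, 6)); // [440, 880, 1320, 1760, 2200, 2640]
```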

This relationship between frequencies is special because it occurs naturally in the physical world. For example, when we pluck the string of a guitar or violin, its vibration frequency is determined by the string length l, but at the same time it also vibrates at higher frequencies corresponding to integer divisions of that length: l/2, l/3, l/4...

So when you pluck a stringed instrument, you don't produce a sine wave or any pure single-frequency sound. You're producing a combination of sounds across these different frequencies: a harmonic overtone series. The same is true of all natural sounds in the world.

https://alexanderchen.github.io/harmonics/

A visualization of the harmonic series.

For a good explanation I also recommend Nice Chord's video on the overtone series (泛音列): a piano note is not a simple "do", but many frequencies stacked together. On spectra, see also《訊號頻域分析方法的理解》.


How Can I Synthesize Sounds Using the Harmonic Series?

What's really interesting is that, unlike the major chord we made earlier, this time you don't actually hear three different sounds. You only hear the lowest note, A4, but it no longer sounds like a plain sine wave! What happens in your brain is called fusion: your auditory system combines harmonically related signals, so it sounds as if all you hear is the fundamental frequency. The harmonics just add "color" to the sound, changing its timbre.

This is the key idea behind additive synthesis. By combining different harmonics (as well as non-harmonic frequencies) and varying their relationships over time, we can produce different kinds of sounds.

In theory you could synthesize piano or guitar timbres this way, but in practice it would be extremely complicated.

Let's try making a sound with a distinctive timbre. On top of the oscillators we saw before, this time we also use individual gains, one per frequency, to mix harmonic partials at different amplitude ratios. Here the fundamental gets full amplitude, and the harmonics get 0.1, 0.2 and 0.5 respectively.

```javascript
let audioCtx = new AudioContext();

let fundamental = audioCtx.createOscillator();
let overtone1 = audioCtx.createOscillator();
let overtone2 = audioCtx.createOscillator();
let overtone3 = audioCtx.createOscillator();

let overtone1Gain = audioCtx.createGain();
let overtone2Gain = audioCtx.createGain();
let overtone3Gain = audioCtx.createGain();
let masterGain = audioCtx.createGain();

fundamental.frequency.value = 440;
overtone1.frequency.value = 880;
overtone2.frequency.value = 1320;
overtone3.frequency.value = 1760;

overtone1Gain.gain.value = 0.1;
overtone2Gain.gain.value = 0.2;
overtone3Gain.gain.value = 0.5;
masterGain.gain.value = 0.3;

fundamental.connect(masterGain);
overtone1.connect(overtone1Gain);
overtone2.connect(overtone2Gain);
overtone3.connect(overtone3Gain);
overtone1Gain.connect(masterGain);
overtone2Gain.connect(masterGain);
overtone3Gain.connect(masterGain);
masterGain.connect(audioCtx.destination);

fundamental.start(0);
overtone1.start(0);
overtone2.start(0);
overtone3.start(0);
```

That was a single note. We can abstract and package the idea into a JavaScript class, HarmonicSynth:

```javascript
let audioCtx = new AudioContext();

class HarmonicSynth {
  /**
   * Given an array of overtone amplitudes, construct an additive
   * synth for that overtone structure
   */
  constructor(partialAmplitudes) {
    this.partials = partialAmplitudes.map(() => audioCtx.createOscillator());
    this.partialGains = partialAmplitudes.map(() => audioCtx.createGain());
    this.masterGain = audioCtx.createGain();

    partialAmplitudes.forEach((amp, index) => {
      this.partialGains[index].gain.value = amp;
      this.partials[index].connect(this.partialGains[index]);
      this.partialGains[index].connect(this.masterGain);
    });
    this.masterGain.gain.value = 1 / partialAmplitudes.length;
  }

  connect(dest) {
    this.masterGain.connect(dest);
  }

  disconnect() {
    this.masterGain.disconnect();
  }

  start(time = 0) {
    this.partials.forEach(o => o.start(time));
  }

  stop(time = 0) {
    this.partials.forEach(o => o.stop(time));
  }

  setFrequencyAtTime(frequency, time) {
    this.partials.forEach((o, index) => {
      o.frequency.setValueAtTime(frequency * (index + 1), time);
    });
  }

  exponentialRampToFrequencyAtTime(frequency, time) {
    this.partials.forEach((o, index) => {
      o.frequency.exponentialRampToValueAtTime(frequency * (index + 1), time);
    });
  }
}

const G4 = 440 * Math.pow(2, -2 / 12);
const A4 = 440;
const F4 = 440 * Math.pow(2, -4 / 12);
const F3 = 440 * Math.pow(2, -16 / 12);
const C4 = 440 * Math.pow(2, -9 / 12);

let synth = new HarmonicSynth([1, 0.1, 0.2, 0.5]);

let t = audioCtx.currentTime;
synth.setFrequencyAtTime(G4, t);
synth.setFrequencyAtTime(G4, t + 0.95);
synth.exponentialRampToFrequencyAtTime(A4, t + 1);
synth.setFrequencyAtTime(A4, t + 1.95);
synth.exponentialRampToFrequencyAtTime(F4, t + 2);
synth.setFrequencyAtTime(F4, t + 2.95);
synth.exponentialRampToFrequencyAtTime(F3, t + 3);
synth.setFrequencyAtTime(F3, t + 3.95);
synth.exponentialRampToFrequencyAtTime(C4, t + 4);

synth.connect(audioCtx.destination);
synth.start();
synth.stop(audioCtx.currentTime + 6);
```

In musical programming environments people often talk about patches, which are basically preconfigured combinations of simpler audio primitives, such as oscillators and gains with particular parameters. Combining them into patches allows for easier reuse.

Making Sawtooth and Square Waves

We can produce a sawtooth wave by oscillating on every frequency of the harmonic series, with amplitudes that decrease as the frequencies increase: the first harmonic has an amplitude of 1/2 of the fundamental, the second has 1/3, and so on:

```javascript
let partials = [];
for (let i = 1; i <= 45; i++) {
  partials.push(1 / i);
}

let synth = new HarmonicSynth(partials);
synth.setFrequencyAtTime(440, audioCtx.currentTime);
synth.connect(audioCtx.destination);
synth.start(0);
```

If we eliminate all the even harmonics from our sawtooth wave, we end up with a square wave, with yet another kind of timbre:

```javascript
let partials = [];
for (let i = 1; i <= 45; i++) {
  if (i % 2 !== 0) {
    partials.push(1 / i);
  }
}

let synth = new HarmonicSynth(partials);
synth.setFrequencyAtTime(440, audioCtx.currentTime);
synth.connect(audioCtx.destination);
synth.start(0);
```

Earlier, the author also mentioned that you can force a square-ish wave by pushing the amplitude past its limits:

https://teropa.info/blog/2016/08/30/amplitude-and-loudness.html#what-are-the-limits-of-amplitude-and-what-happens-if-i-exceed-them

Of course, in practice all of these waveforms are built in, via OscillatorNode.type.

The author (Teropa) also built Harmonics Explorer, an interactive tool for watching sine, square and sawtooth waves being summed from their partials.


Beating

Combine sound waves that have almost, but not quite, the same frequency: the interference pattern between the waves changes gradually over time as they fall in and out of phase.

```javascript
let audioCtx = new AudioContext();

let osc1 = audioCtx.createOscillator();
let osc2 = audioCtx.createOscillator();
let gain = audioCtx.createGain();

osc1.frequency.value = 330;
osc2.frequency.value = 331;
gain.gain.value = 0.5;

osc1.connect(gain);
osc2.connect(gain);
gain.connect(audioCtx.destination);

osc1.start();
osc2.start();
osc1.stop(audioCtx.currentTime + 20);
osc2.stop(audioCtx.currentTime + 20);
```

This effect is called beating. It is the result of the gradually changing wave phases shifting between reinforcing each other and canceling each other. The resulting effect is pretty remarkable, and we can hear it by setting up oscillators that have nearly the same frequency. For example, using frequencies 330 and 330.2 we get a sound that comes and goes every few seconds:
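The beat rate is simply the difference between the two frequencies, so 330 Hz against 330.2 Hz pulses once every five seconds. A quick check (`beatPeriodSeconds` is a hypothetical helper):

```javascript
// Beat frequency between two nearly-equal tones is their difference;
// the audible "comes and goes" period is its reciprocal.
function beatPeriodSeconds(f1, f2) {
  return 1 / Math.abs(f1 - f2);
}

console.log(beatPeriodSeconds(330, 331));   // 1 second per beat
console.log(beatPeriodSeconds(330, 330.2)); // about 5 seconds per beat
```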

[Figure: beat interference waveform, via Wikipedia]

By mixing in more voices, you can get changes in pitch and timbre at the same time:

```javascript
let audioCtx = new AudioContext();

let osc1 = audioCtx.createOscillator();
let osc2 = audioCtx.createOscillator();
let gain1 = audioCtx.createGain();
let osc3 = audioCtx.createOscillator();
let osc4 = audioCtx.createOscillator();
let gain2 = audioCtx.createGain();
let osc5 = audioCtx.createOscillator();
let osc6 = audioCtx.createOscillator();
let gain3 = audioCtx.createGain();
let masterGain = audioCtx.createGain();

osc1.frequency.value = 330;
osc2.frequency.value = 330.2;
gain1.gain.value = 0.5;
osc3.frequency.value = 440;
osc4.frequency.value = 440.33;
gain2.gain.value = 0.5;
osc5.frequency.value = 587;
osc6.frequency.value = 587.25;
gain3.gain.value = 0.5;
masterGain.gain.value = 0.5;

osc1.connect(gain1);
osc2.connect(gain1);
gain1.connect(masterGain);
osc3.connect(gain2);
osc4.connect(gain2);
gain2.connect(masterGain);
osc5.connect(gain3);
osc6.connect(gain3);
gain3.connect(masterGain);
masterGain.connect(audioCtx.destination);

osc1.start();
osc2.start();
osc3.start();
osc4.start();
osc5.start();
osc6.start();
```

2021/6/25

Produced by 落日間
