[DOC] update sample to the new doxy method
@@ -1,41 +0,0 @@
=?=RIVER: Bases =?=
__________________________________________________
[right][tutorial[000_Build | Next: Tutorals]][/right]

=== Overview:===

===User requires:===
To use ewol you need to know only C++ language. It could be usefull to know:
:** [b]Python[/b] for all build tool.
:** [b]git[/b] for all version management
:** [b]Audio[/b] Basic knowlege of audio streaming af data organisation.

=== Architecture:===
River has been designed to replace the pulseAudio basic asyncronous interface that create
more problem that it will solve. The second point is that is not enougth portable to be
embended in a proprietary software without distributing all the sources (Ios).

Start at this point we will have simple objectives :
:** manage multiple Low level interface: (done by the [lib[airtaudio | AirTAudio]] interface):
::** for linux
:::** Alsa
:::** Pulse
:::** Oss
::** for Mac-OsX
:::** CoreAudio
::** for IOs
:::** CoreAudio (embended version)
::** for Windows
:::** ASIO
::** For Android
:::** Java (JDK-6)
:** Synchronous interface ==> no delay and reduce latency
:** Manage the thread priority (need sometimes to be more reactive)
:** manage mixing of some flow (2 inputs stereo and the user want 1 input quad)
:** AEC Acoustic Echo Cancelation (TODO : in the current implementation we have a simple sound cutter)
:** Equalizer (done with [lib[drain | Drain])
:** Resmpling (done by the libspeexDSP)
:** Correct volume management (and configurable)
:** Fade-in and Fade-out (done with [lib[drain | Drain])
:** Channel reorganisation (done with [lib[drain | Drain])
:** A correct feedback interface
doc/build.md (new file, 100 lines)
@@ -0,0 +1,100 @@
Build lib & build sample {#audio_river_build}
========================

@tableofcontents

Download: {#audio_river_build_download}
=========

audio-river uses some tools to manage its sources and to build them:

Need google repo: {#audio_river_build_download_repo}
-----------------

See: http://source.android.com/source/downloading.html#installing-repo

On all platforms:
```{.sh}
mkdir ~/.bin
PATH=~/.bin:$PATH
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo
chmod a+x ~/.bin/repo
```

On Ubuntu:
```{.sh}
sudo apt-get install repo
```

On ArchLinux:
```{.sh}
sudo pacman -S repo
```

lutin (build-system): {#audio_river_build_download_lutin}
---------------------

```{.sh}
pip install lutin --user
# optional dependency of lutin (resizes images for application releases)
pip install pillow --user
```

Dependencies: {#audio_river_build_download_dependency}
-------------

```{.sh}
mkdir -p WORKING_DIRECTORY/framework
cd WORKING_DIRECTORY/framework
repo init -u git://github.com/atria-soft/manifest.git
repo sync -j8
cd ../..
```

Sources: {#audio_river_build_download_sources}
--------

They are already downloaded by the repo manifest, in:

```{.sh}
cd WORKING_DIRECTORY/framework/atria-soft/audio-river
```

Build: {#audio_river_build_build}
======

You must stay in your working directory:
```{.sh}
cd WORKING_DIRECTORY
```

Library: {#audio_river_build_build_library}
--------

```{.sh}
lutin -mdebug audio-river
```

Sample: {#audio_river_build_build_sample}
-------

```{.sh}
lutin -mdebug audio-river-sample-read?run
lutin -mdebug audio-river-sample-write?run
```

A faster way:
```{.sh}
lutin -mdebug audio-river-*
```


Run sample: {#audio_river_build_run_sample}
===========

Run each command in a distinct shell:
```{.sh}
lutin -mdebug audio-river-sample-read?run
lutin -mdebug audio-river-sample-write?run
```
doc/configFile.md (new file, 77 lines)
@@ -0,0 +1,77 @@
River configuration file {#audio_river_config_file}
========================

@tableofcontents

Objectives: {#audio_river_config_file_objectif}
==========

- Understand the architecture of the configuration file.
- See all that can be done with it.


Basis: {#audio_river_config_file_bases}
======

The river configuration file is a JSON file. We use @ref ejson_mainpage_what to parse it, which gives us some writing facilities.

River provides a list of hardware interfaces and virtual interfaces.

The hardware interfaces are provided by @ref audio_orchestra_mainpage_what, so we can plug onto every platform.

The file is simply architected around a list of objects:

```{.json}
{
	"speaker":{

	},
	"microphone":{

	},
	"mixed-in-out":{

	},
}
```

With this config we declare 3 interfaces: speaker, microphone and mixed-in-out.


Hardware configuration: {#audio_river_config_file_hw_config}
=======================

In every interface we need to define some elements (a complete example is sketched after this list):
- "io": Can be input/output/... depending on the virtual interface.
- "map-on": An object to configure the airtaudio interface.
- "frequency": 0 to automatically select one, or the frequency at which to open the hardware device.
- "channel-map": List of all channels in the stream:
  * "front-left"
  * "front-center"
  * "front-right"
  * "rear-left"
  * "rear-center"
  * "rear-right"
  * "surround-left"
  * "surround-right"
  * "sub-woofer"
  * "lfe"
- "type": Format used to open the stream:
  * "auto": Detect the best type
  * "int8"
  * "int8-on-int16"
  * "int16"
  * "int16-on-int32"
  * "int24"
  * "int32"
  * "int32-on-int64"
  * "int64"
  * "float"
  * "double"
- "nb-chunk": Number of chunks used to open the stream.

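For reference, here is what a complete hardware interface can look like with these elements filled in. This is a sketch adapted from the read sample configuration; the values are illustrative:

```{.json}
{
	"microphone":{
		"io":"input",
		"map-on":{
			"interface":"auto",
			"name":"default"
		},
		"frequency":0,
		"channel-map":["front-left", "front-right"],
		"type":"auto",
		"nb-chunk":1024
	}
}
```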
doc/faq.bb (deleted, 36 lines)
@@ -1,36 +0,0 @@
=?= FAQ =?=

== What is ewol licence ==

This is really simple : APACHE-2 :

Copyright ewol Edouard DUPIN

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

[[http://www.apache.org/licenses/LICENSE-2.0]]

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.




== Why we use "DECLARE_FACTORY" Macro ? ==

For some reason!!! But everything might be clear:
:** In ewol we masively use std::shared_ptr<xxx> (I have create my own but it is not "standard" (I like when we use genecic system)).
:** The main class : [class[ewol::Object]] herited from [i]std::enable_shared_from_this<Object>[/i] to permit to access at his own [i]std::shared_ptr[/i].
:** Acces At his own [i]std::shared_ptr[/i] is not allowed in the class contructor/destructor.
:** Many time for meta-widget we need to propagate our [i]std::shared_ptr[/i] in child.

Then for all these reasons, I have create a simple MACRO that create a static template funtion that create the object and just after
creation call the init(...) function to permit to create a complex widget or others with some writing convinience.
@@ -1,25 +1,30 @@
Read stream feedback {#audio_river_feedback}
====================

=== Objectif ===
:** Implement a feedback.
@tableofcontents

=== Bases: ===
Objectifs: {#audio_river_feedback_objectif}
==========

- Implement a feedback.

Bases: {#audio_river_feedback_base}
======

A feedback is a stream that is generated by an output.

To get a feedback this is the same implementation of an input and link it on an output.


What change :
What change:

[code style=c++]
```{.cpp}
//Get the generic feedback on speaker:
interface = manager->createFeedback(48000,
                                    std::vector<audio::channel>(),
                                    audio::format_int16,
                                    "speaker");
[/code]
```

[note]
Input interface does not provide feedback.
[/note]
**Note:** Input interface does not provide feedback.
doc/index.bb (deleted, 69 lines)
@@ -1,69 +0,0 @@
== [center]RIVER library[/center] ==
__________________________________________________

===What is RIVER, and how can I use it?===
RIVER is a multi-platform library to manage the input and output audio flow.
It can be compared with PulseAudio or Jack, but at the difference at the 2 interfaces
it is designed to be multi-platform and is based on licence that permit to integrate it
on every program we want.

===Where can I use it?===
Everywhere! RIVER is cross-platform devolopped to support bases OS:
: ** Linux (over Alsa, Pulseaudio, JackD)
: ** Windows (over ASIO)
: ** MacOs (over CoreAudio)
: ** Android (Over Ewol wrapper little complicated need to be change later)
: ** IOs (over CoreAudio for ios)

===What languages are supported?===
RIVER is written in C++11 with posibilities to compile it with C++03 + Boost

===Are there any licensing restrictions?===
RIVER is [b]FREE software[/b] and [i]all sub-library are FREE and staticly linkable !!![/i]

That allow you to use it for every program you want, including those developing proprietary software, without any license fees or royalties.

[note]The static support is important for some platform like IOs, and this limit the external library use at some license like :
:** BSD*
:** MIT
:** APPACHE-2
:** PNG
:** ZLIB
This exclude the classical extern library with licence:
:** L-GPL
:** GPL
[/note]

==== License (APACHE 2) ====
Copyright ewol Edouard DUPIN

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

[[http://www.apache.org/licenses/LICENSE-2.0]]

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


==== Depends library: ====
===== License: =====
:** [b][lib[etk | e-tk]][/b] : APACHE-2
:** [b][lib[airtaudio | airtaudio]][/b] : MIT/APACHE-2
:** [b][lib[ejson | e-json]][/b] : APACHE-2
:** [b][lib[drain | Drain]][/b] : APACHE-2


===== Program Using RIVER =====
:** [b][[http://play.google.com/store/apps/details?id=com.edouarddupin.worddown | worddown]][/b] : (Proprietary) Worddown is a simple word game threw [lib[ewolsa | ewol-simple-audio]].

== Main documentation: ==

[doc[001_bases | Global Documantation]]

[tutorial[000_Build | Tutorials]]
doc/mainpage.md (new file, 92 lines)
@@ -0,0 +1,92 @@
AUDIO-RIVER library {#mainpage}
===================

@tableofcontents

What is AUDIO-RIVER: {#audio_river_mainpage_what}
====================

AUDIO-RIVER is a multi-platform library to manage input and output audio flows.
It can be compared with PulseAudio or Jack but, unlike those two interfaces,
it is designed to be multi-platform and is released under a license that permits
integrating it into every program we want.


What it does: {#audio_river_mainpage_what_it_does}
=============

RIVER is developed to be cross-platform and supports the base OSes:
- Linux (over Alsa, PulseAudio, JackD)
- Windows (over ASIO)
- MacOs (over CoreAudio)
- Android (over an Ewol wrapper; a little complicated, needs to be changed later)
- IOs (over CoreAudio for iOS)

AUDIO-RIVER depends on the STL (compatible with the MacOs STL (CXX)).

Architecture:
-------------

River has been designed to replace the basic PulseAudio asynchronous interface, which creates
more problems than it solves. The second point is that PulseAudio is not portable enough to be
embedded in proprietary software without distributing all the sources (iOS).

Starting from this point, we have simple objectives:
- Manage multiple low-level interfaces: @ref audio_orchestra_mainpage_what
  * for Linux (Alsa, Pulse, Oss)
  * for Mac-OsX (CoreAudio)
  * for IOs (CoreAudio, embedded version)
  * for Windows (ASIO)
  * for Android (Java (JDK...))
- Synchronous interface ==> no added delay and reduced latency
- Manage the thread priority (we sometimes need to be more reactive)
- Manage mixing of several flows (e.g. 2 stereo inputs when the user wants 1 quad input)
- AEC, Acoustic Echo Cancellation (TODO: the current implementation is a simple sound cutter)
- Equalizer (done with @ref audio_drain_mainpage_what)
- Resampling (done by libspeexDSP)
- Correct volume management (and configurable)
- Fade-in and fade-out (done with @ref audio_drain_mainpage_what)
- Channel reorganisation (done with @ref audio_drain_mainpage_what)
- A correct feedback interface


What languages are supported? {#audio_river_mainpage_language}
=============================

AUDIO-RIVER is written in C++.


Are there any licensing restrictions? {#audio_river_mainpage_license_restriction}
=====================================

AUDIO-RIVER is **FREE software** and _all sub-libraries are FREE and statically linkable!_


License (APACHE-2.0) {#audio_river_mainpage_license}
====================

Copyright AUDIO-RIVER Edouard DUPIN

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


Other pages {#audio_river_mainpage_sub_page}
===========

- @ref audio_river_build
- @ref audio_river_read
- @ref audio_river_write
- @ref audio_river_feedback
- @ref audio_river_config_file
- [**ewol coding style**](http://atria-soft.github.io/ewol/ewol_coding_style.html)
doc/read.md (new file, 115 lines)
@@ -0,0 +1,115 @@
Read stream from Audio input {#audio_river_read}
============================

@tableofcontents

Objectives: {#audio_river_read_objectif}
==========

- Understand the basics of river.
- Create a simple recording interface that prints the average of the samples' absolute values.


When you create an application based on the river audio interface you need:

Include: {#audio_river_read_include}
========

Include the manager and interface node:
@snippet read.cpp audio_river_sample_include

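For reference, in the former BBCode tutorial this corresponded to the following headers; the exact paths are taken from that tutorial and may have moved with the audio-river rename:

```{.cpp}
#include <river/river.h>     // river::init() / river::initString()
#include <river/Manager.h>   // river::Manager: per-application audio manager
#include <river/Interface.h> // river::Interface: one audio stream (input/output/feedback)
```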
Initialize the River library: {#audio_river_read_init}
=============================

We first need to initialize the etk sub-library (needed to select the log level of the sub-libraries and for the file access abstraction):
@snippet read.cpp audio_river_sample_init

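In the former tutorial this initialization was a single call (sketch; `_argc`/`_argv` are the arguments of your `main()`):

```{.cpp}
// the only etk initialization needed for the whole process:
etk::init(_argc, _argv);
```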
Now we will initialize the river library.
To do this we have 2 possibilities:

With a file:
------------

```{.cpp}
// initialize river interface
river::init("DATA:configFileName.json");
```

With a json string:
-------------------

@snippet read.cpp audio_river_sample_read_config_file

```{.cpp}
// initialize river interface
river::initString(configurationRiver);
```

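The configuration-string snippet corresponds roughly to the following code from the former tutorial (a sketch; the relaxed quoting is accepted by ejson, as noted below):

```{.cpp}
static const std::string configurationRiver =
    "{\n"
    "    microphone:{\n"
    "        io:'input',\n"
    "        map-on:{\n"
    "            interface:'auto',\n"
    "            name:'default',\n"
    "        },\n"
    "        frequency:0,\n"
    "        channel-map:['front-left', 'front-right'],\n"
    "        type:'auto',\n"
    "        nb-chunk:1024\n"
    "    }\n"
    "}\n";
```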
For the example we select the second solution (it is faster to implement, and the example and its resources stay in the same place).

river::init / river::initString must be called only one time for the whole application; it represents the hardware configuration.
It is NOT dynamic.

To understand the configuration file, please see @ref audio_river_config_file

This JSON is parsed by @ref ejson_mainpage_what, which adds some conveniences:
- Optional " around element names.
- The possibility to replace " with '.


Get the river interface manager: {#audio_river_read_river_interface}
================================

An application can have many interfaces but only one Manager, and a process can contain many applications.

Then, we get the first application manager handle:
@snippet read.cpp audio_river_sample_get_interface

*Note:* You get back the existing application handle when you create a new one with the same name.

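In the former tutorial the corresponding code was (sketch; `std11` is the project's C++03/C++11 compatibility layer):

```{.cpp}
// Create the River manager for the application (or the part of the application).
std11::shared_ptr<river::Manager> manager = river::Manager::create("river_sample_read");
```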
Create your read interface: {#audio_river_read_river_read_interface}
===========================

Generic code (a sketch is shown after the list below):
@snippet read.cpp audio_river_sample_create_read_interface

Here we create an interface with:
- A frequency of 48000 Hz.
- The default low-level channel definition.
- A data interface of 16-bit samples coded in [-32768..32767].
- The input interface named "microphone".


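Based on the former tutorial, the snippet looks roughly like this (an empty channel vector requests the default channel map):

```{.cpp}
// create the interface handle:
std11::shared_ptr<river::Interface> interface;
// get the generic input:
interface = manager->createInput(48000,
                                 std::vector<audio::channel>(),
                                 audio::format_int16,
                                 "microphone");
```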
Set data callback: {#audio_river_read_get_data}
==================

The best way to get data is to register a simple callback.
The callback is called when samples arrive, and you have nbChunk/frequency seconds
to process the data; otherwise you can generate errors in the data stream.

@snippet read.cpp audio_river_sample_set_callback

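In the former tutorial this was done with std11::bind (sketch; the six placeholders forward the callback arguments shown in the next section):

```{.cpp}
// set callback mode ...
interface->setInputCallback(std11::bind(&onDataReceived,
                                        std11::placeholders::_1,
                                        std11::placeholders::_2,
                                        std11::placeholders::_3,
                                        std11::placeholders::_4,
                                        std11::placeholders::_5,
                                        std11::placeholders::_6));
```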
Callback implementation: {#audio_river_read_callback}
========================

Simply declare your function and do what you want inside it.

@snippet read.cpp audio_river_sample_callback_implement

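The callback in the former tutorial had the following shape (sketch; it only checks the sample format before processing):

```{.cpp}
void onDataReceived(const void* _data,
                    const std11::chrono::system_clock::time_point& _time,
                    size_t _nbChunk,
                    enum audio::format _format,
                    uint32_t _frequency,
                    const std::vector<audio::channel>& _map) {
	if (_format == audio::format_int16) {
		// process the int16_t samples here (e.g. average of absolute values)
	}
}
```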
Start and stop the stream: {#audio_river_read_start_stop}
==========================

@snippet read.cpp audio_river_sample_read_start_stop

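From the former tutorial (sketch):

```{.cpp}
// start the stream
interface->start();
// wait 10 seconds ...
sleep(10);
// stop the stream
interface->stop();
```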
Remove interfaces: {#audio_river_read_reset}
==================

@snippet read.cpp audio_river_sample_read_reset

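As in the former tutorial, releasing the shared pointers removes the interface and the manager (sketch):

```{.cpp}
// remove the interface and the manager.
interface.reset();
manager.reset();
```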


Full Sample: {#audio_river_read_full_sample}
============

@snippet read.cpp audio_river_sample_read_all
@@ -1,57 +0,0 @@
=?=River extract and build examples an example=?=

All developpement software will start by getting the dependency and the sources.

=== Linux dependency packages ===
[code style=shell]
sudo apt-get install g++ zlib1g-dev libasound2-dev
# if you want to compile with clang :
sudo apt-get install clang
[/code]


=== Download instructions ===

Download the software : This is the simple way You really need only a part of the ewol framework
[code style=shell]
# create a working directory path
mkdir your_workspace_path
cd your_workspace_path
# clone ewol and all sub-library
git clone git://github.com/HeeroYui/ewol.git
cd ewol
git submodule init
git submodule update
cd ..
[/code]

If you prefer creating with the packege you needed :
[code style=shell]
mkdir -p your_workspace_path
cd your_workspace_path
# download all you needs
git clone git://github.com/HeeroYui/lutin.git
git clone git://github.com/HeeroYui/etk.git
git clone git://github.com/HeeroYui/audio.git
git clone git://github.com/HeeroYui/ejson.git
git clone git://github.com/HeeroYui/airtaudio.git
git clone git://github.com/HeeroYui/drain.git
git clone git://github.com/HeeroYui/river.git
[/code]

[note]
The full build tool documentation is availlable here : [[http://heeroyui.github.io/lutin/ | lutin]]
[/note]

=== Common build instructions ===

Build the basic examples & test:
[code style=shell]
./ewol/build/lutin.py -mdebug river_sample_read
[/code]

To run an application you will find it directly on the out 'staging' tree :
[code style=shell]
./out/Linux/debug/staging/clang/river_sample_read/usr/bin/river_sample_read -l4
[/code]
@@ -1,158 +0,0 @@

=== Objectif ===
:** Understand basis of river
:** Create a simple recording interface that print the average of sample absolute value.

=== sample source: ===
[[http://github.com/HeeroYui/river.git/sample/read/ | sample source]]

=== Bases: ===

When you will create an application based on the river audio interface you need :

==== Include: ====

Include manager and interface node

[code style=c++]
#include <river/river.h>
#include <river/Manager.h>
#include <river/Interface.h>
[/code]

==== Initilize the River library: ====

We first need to initialize etk sub library (needed to select the log level of sub-libraries and file access abstraction
[code style=c++]
// the only one init for etk:
etk::init(_argc, _argv);
[/code]

Now we will initilaize the river library.
To do this We have 2 posibilities:
:** With a file:
[code style=c++]
// initialize river interface
river::init("DATA:configFileName.json");
[/code]
:** With a json string:
[code style=c++]
static const std::string configurationRiver =
    "{\n"
    "    microphone:{\n"
    "        io:'input',\n"
    "        map-on:{\n"
    "            interface:'auto',\n"
    "            name:'default',\n"
    "        },\n"
    "        frequency:0,\n"
    "        channel-map:['front-left', 'front-right'],\n"
    "        type:'auto',\n"
    "        nb-chunk:1024\n"
    "    }\n"
    "}\n";
// initialize river interface
river::initString(configurationRiver);
[/code]

For the example we select the second solution (faster to implement example and resource at the same position.

river::init / river::initString must be called only one time for all the application, this represent the hardware configuration.
It is Nearly not dynamic

To understand the configuration file Please see [tutorial[004_ConfigurationFile | Configuration file]]

[note]
This json is parsed by the [lib[ejson | e-json library]] it containe some update like:
:** Optionnal " in the name of element.
:** The possibilities to remplace " with '.
[/note]


==== Get the river interface manager: ====

An application can have many interface and only one Manager, And a process can contain many application.

Then, we will get the first application manager handle.

[code style=c++]
// Create the River manager for tha application or part of the application.
std11::shared_ptr<river::Manager> manager = river::Manager::create("river_sample_read");
[/code]

[note]
You can get back the application handle when you create a new one with the same name.
[/note]

==== Create your read interface: ====

[code style=c++]
// create interface:
std11::shared_ptr<river::Interface> interface;
//Get the generic input:
interface = manager->createInput(48000,
                                 std::vector<audio::channel>(),
                                 audio::format_int16,
                                 "microphone");
[/code]

Here we create an interface with:
:** The frequency of 48000 Hz.
:** The default Low level definition channel
:** A data interface of 16 bits samples coded in [-32768..32767]
:** Select input interaface name "microphone"


==== Get datas: ====

The best way to get data is to instanciate a simple callback.
The callback is called when sample arrive and you have the nbChunk/frequency
to process the data, otherwise you can generate error in data stream.


[code style=c++]
// set callback mode ...
interface->setInputCallback(std11::bind(&onDataReceived,
                                        std11::placeholders::_1,
                                        std11::placeholders::_2,
                                        std11::placeholders::_3,
                                        std11::placeholders::_4,
                                        std11::placeholders::_5,
                                        std11::placeholders::_6));
[/code]

==== Callback inplementation: ====

Simply declare your function and do what you want inside.

[code style=c++]
void onDataReceived(const void* _data,
                    const std11::chrono::system_clock::time_point& _time,
                    size_t _nbChunk,
                    enum audio::format _format,
                    uint32_t _frequency,
                    const std::vector<audio::channel>& _map) {
	if (_format == audio::format_int16) {
		// stuff here
	}
}
[/code]

==== start and stop: ====

[code style=c++]
// start the stream
interface->start();
// wait 10 second ...
sleep(10);
// stop the stream
interface->stop();
[/code]

==== Remove interfaces: ====

[code style=c++]
// remove interface and manager.
interface.reset();
manager.reset();
[/code]
@@ -1,84 +0,0 @@

=== Objectif ===
:** Understand write audio stream

=== sample source: ===
[[http://github.com/HeeroYui/river.git/sample/write/ | sample source]]

=== Bases: ===

The writing work nearly like the read turoral. Then we will just see what has change.

==== File configuration: ====

[code style=c++]
static const std::string configurationRiver =
    "{\n"
    "    speaker:{\n"
    "        io:'output',\n"
    "        map-on:{\n"
    "            interface:'auto',\n"
    "            name:'default',\n"
    "        },\n"
    "        frequency:0,\n"
    "        channel-map:['front-left', 'front-right'],\n"
    "        type:'auto',\n"
    "        nb-chunk:1024,\n"
    "        volume-name:'MASTER'\n"
    "    }\n"
    "}\n";
[/code]

==== Create your write interface: ====

[code style=c++]
// create interface:
std11::shared_ptr<river::Interface> interface;
//Get the generic input:
interface = manager->createOutput(48000,
                                  std::vector<audio::channel>(),
                                  audio::format_int16,
                                  "speaker");
[/code]

Here we create an interface with:
:** The frequency of 48000 Hz.
:** The default Low level definition channel
:** A data interface of 16 bits samples coded in [-32768..32767]
:** Select input interaface name "speaker"


==== write datas: ====

The best way to get data is to instanciate a simple callback.
The callback is called when sample are needed and you have the nbChunk/frequency
to generate the data, otherwise you can generate error in data stream.


[code style=c++]
// set callback mode ...
interface->setOutputCallback(std11::bind(&onDataNeeded,
                                         std11::placeholders::_1,
                                         std11::placeholders::_2,
                                         std11::placeholders::_3,
                                         std11::placeholders::_4,
                                         std11::placeholders::_5,
                                         std11::placeholders::_6));
[/code]

==== Callback inplementation: ====

Simply declare your function and do what you want inside.

[code style=c++]
void onDataNeeded(void* _data,
                  const std11::chrono::system_clock::time_point& _time,
                  size_t _nbChunk,
                  enum audio::format _format,
                  uint32_t _frequency,
                  const std::vector<audio::channel>& _map) {
	if (_format == audio::format_int16) {
		// stuff here
	}
}
[/code]
@@ -1,70 +0,0 @@

=== Objectif ===
:** Understand the architecture of the configuration file.
:** all that can be done with it.


=== Basis: ===

The river configuration file is a json file. We use [lib[ejson | e-json library]] to parse it then we have some writing facilities.


River provide a list a harware interface and virtual interface.


The hardware interface are provided by [lib[airtaudio | AirTAudio library]] then we will plug on every platform.


The file is simply architecture around a list of object:

[code style=json]
{
	"speaker":{

	},
	"microphone":{

	},
	"mixed-in-out":{

	},
}
[/code]

With this config we declare 3 interfaces : speaker, microphone and mixed-in-out.


=== Harware configuration: ===

In every interface we need to define some Element:
:** "io" :
:: Can be input/output/... depending of virtual interface...
:** "map-on": An object to configure airtaudio interface.
:** "frequency": 0 to automatic select one. Or the frequency to open harware device
:** "channel-map": List of all channel in the stream:
::** "front-left"
::** "front-center"
::** "front-right"
::** "rear-left"
::** "rear-center"
::** "rear-right"
::** "surround-left",
::** "surround-right",
::** "sub-woofer",
::** "lfe"
:** "type": Fomat to open the stream:
::** "auto": Detect the best type
::** "int8",
::** "int8-on-int16",
::** "int16",
::** "int16-on-int32",
::** "int24",
::** "int32",
::** "int32-on-int64",
::** "int64",
::** "float",
::** "double"
:** "nb-chunk": Number of chunk to open the stream.
doc/write.md (new file, 53 lines)
@@ -0,0 +1,53 @@
Write stream to Audio output {#audio_river_write}
============================

@tableofcontents

Objectives: {#audio_river_write_objectif}
==========

- Understand how to write an audio stream.

Writing works nearly like the read tutorial, so we will just see what changes.

File configuration: {#audio_river_write_config}
===================

@snippet write.cpp audio_river_sample_write_config_file


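For reference, in the former tutorial this configuration string was the following (sketch; note the extra volume-name field compared to the read configuration):

```{.cpp}
static const std::string configurationRiver =
    "{\n"
    "    speaker:{\n"
    "        io:'output',\n"
    "        map-on:{\n"
    "            interface:'auto',\n"
    "            name:'default',\n"
    "        },\n"
    "        frequency:0,\n"
    "        channel-map:['front-left', 'front-right'],\n"
    "        type:'auto',\n"
    "        nb-chunk:1024,\n"
    "        volume-name:'MASTER'\n"
    "    }\n"
    "}\n";
```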
Create your write interface: {#audio_river_write_interface}
============================

Generic code (a sketch is shown after the list below):
@snippet write.cpp audio_river_sample_create_write_interface

Here we create an interface with:
- A frequency of 48000 Hz.
- The default low-level channel definition.
- A data interface of 16-bit samples coded in [-32768..32767].
- The output interface named "speaker".


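Based on the former tutorial (sketch):

```{.cpp}
// create the interface handle:
std11::shared_ptr<river::Interface> interface;
// get the generic output:
interface = manager->createOutput(48000,
                                  std::vector<audio::channel>(),
                                  audio::format_int16,
                                  "speaker");
```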
Set data callback: {#audio_river_write_get_data}
==================

The best way to provide data is to register a simple callback.
The callback is called when samples are needed, and you have nbChunk/frequency seconds
to generate the data; otherwise you can generate errors in the data stream.

@snippet write.cpp audio_river_sample_set_callback

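In the former tutorial (sketch; the placeholders forward the arguments shown in the next section):

```{.cpp}
// set callback mode ...
interface->setOutputCallback(std11::bind(&onDataNeeded,
                                         std11::placeholders::_1,
                                         std11::placeholders::_2,
                                         std11::placeholders::_3,
                                         std11::placeholders::_4,
                                         std11::placeholders::_5,
                                         std11::placeholders::_6));
```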
Callback implementation: {#audio_river_write_callback}
========================

Simply declare your function and do what you want inside it.

@snippet write.cpp audio_river_sample_callback_implement

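The write callback in the former tutorial had this shape (sketch; _data is the buffer to fill):

```{.cpp}
void onDataNeeded(void* _data,
                  const std11::chrono::system_clock::time_point& _time,
                  size_t _nbChunk,
                  enum audio::format _format,
                  uint32_t _frequency,
                  const std::vector<audio::channel>& _map) {
	if (_format == audio::format_int16) {
		// fill _data with _nbChunk chunks of int16_t samples here
	}
}
```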


Full Sample: {#audio_river_write_full_sample}
============

@snippet write.cpp audio_river_sample_write_all