Tested on Ubuntu 22.04.5
All applications need:
- Node 20
- pnpm
Client application needs:
- Xvfb, x11vnc, and fvwm
- ffmpeg
- Chrome
To install all prerequisites, you can use the following commands:

```shell
# Install Node.js 20
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install pnpm
corepack enable pnpm

# Install Xvfb, x11vnc, fvwm, and ffmpeg
sudo apt-get install -y xvfb x11vnc fvwm ffmpeg

# Install Chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install -y ./google-chrome-stable_current_amd64.deb
```

Clone the repository:

```shell
git clone https://github.com/ivchicano/mediasoup-LLLS-experiments.git
cd mediasoup-LLLS-experiments
```

The client expects a video file and an audio file in the client/media directory, with the following names and formats:
- fakevideo.y4m (download example file)
- fakeaudio.wav (download example file)
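If you only need placeholder media, test files can be generated with ffmpeg. This is a sketch, not the repository's procedure: the resolution, duration, and tone are arbitrary choices, and the built-in test pattern shows a timestamp rather than the exact per-frame frame number that the example files contain.

```shell
# Create the media directory and generate placeholder files with ffmpeg.
mkdir -p client/media

# 30 s, 640x480, 30 fps test pattern in Y4M format.
# -pix_fmt yuv420p is required because the Y4M muxer only accepts YUV pixel formats.
ffmpeg -y -f lavfi -i "testsrc=duration=30:size=640x480:rate=30" \
  -pix_fmt yuv420p client/media/fakevideo.y4m

# 30 s, 440 Hz sine tone as WAV audio.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=30" client/media/fakeaudio.wav
```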
You can also create an AWS EC2 AMI with the master and worker ready to run with the script:

```shell
./aws/create_amis.sh -h
```

You can run the project by itself:
- Start the master first (it starts a local worker automatically):

  ```shell
  cd master && pnpm install && pnpm run start
  ```

- Run the client (set the runTest property in client/config.json to `true` to run an automatic test with recording):

  ```shell
  cd client && pnpm install && pnpm run start
  ```
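The automatic test is controlled by the runTest property in client/config.json. A minimal sketch of that file (only runTest comes from this document; keep any other fields your config already has):

```json
{
  "runTest": true
}
```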
Alternatively, run the master on an AWS EC2 instance (which will create worker instances in EC2) with multiple tries using the launch-experiment.sh script. This requires the client prerequisites, plus aws-cli and jq. For more information, run:

```shell
./launch-experiment.sh -h
```

WebRTC stats of the session will be saved in client/stats/stats.json.
Recordings will be saved in client/recordings:
- `fullscreen.mp4` is the recording of the full browser screen
- `publisher.webm` is the recording of the stream sent by the publisher
- `subscriber.webm` is the recording of the stream received by the subscriber
When running with the launch-experiment.sh script, it will save all tries in the experiment_results directory.
The QoE analysis scripts additionally require:
- Python
- pip
- Tesseract OCR
The example files provided contain the frame number in each frame of the video. This can be used to check the RTT by comparing the frame numbers shown in the full-screen recording of each participant. The frame difference can be calculated by using the example files for the test and running the Python script (requires Python and pip installed):

```shell
pip3 install -r qoe_scripts/requirements.py
python3 qoe_scripts/rtt_analyzer.py --help
```

This will show the options for the Python script. Set the video parameter to the full-screen recording, and set the OCR parameters to the positions in the video where the frame numbers are shown.
This produces a text file with the OCR reading pairings between publisher and subscriber, which can then be analyzed with the analysis/analysis.ipynb Jupyter notebook.
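The arithmetic behind the analysis is simple: the difference between the publisher's and subscriber's frame numbers, divided by the recording frame rate, gives an estimated latency. A minimal sketch of that calculation (the pairing tuples and the 30 fps rate are illustrative assumptions, not the actual pairing-file format):

```python
# Estimate latency from publisher/subscriber frame-number pairings.
# Each pairing is assumed to be (publisher_frame, subscriber_frame)
# captured at the same instant; FPS is a hypothetical recording rate.
FPS = 30

def frame_latencies_ms(pairings, fps=FPS):
    """Convert frame-number differences into latencies in milliseconds."""
    return [(pub - sub) / fps * 1000.0 for pub, sub in pairings]

# Example: the subscriber lags the publisher by 3, 4, and 3 frames.
sample = [(120, 117), (150, 146), (180, 177)]
latencies = frame_latencies_ms(sample)
print(latencies)                              # roughly [100.0, 133.3, 100.0] ms
print(sum(latencies) / len(latencies))        # mean latency in ms
```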
Sometimes the OCR readings may be wrong; the qoe_scripts/ocr_fixer.py script helps detect possible issues so they can be corrected manually:

```shell
pip3 install -r qoe_scripts/requirements.py
python3 qoe_scripts/ocr_fixer.py --help
```