Thanks, but the SoundMeter class from your example seems to use a fresh instance of MediaRecorder, which writes to a file rather than publishing the recorded sound to the Flashphoner stream (as far as I understand). I also need to visualize the incoming sound (played via flashphoner stream.play()), not only the sound being published.

I tried the built-in Android Visualizer class, but it doesn't work either. If I pass audio session id 0 (i.e. the mixed output, what you actually hear from the phone), it doesn't provide any useful data, probably because the WebRTC audio track has the VOICE_COMMUNICATION usage, which denies capture. Maybe if I could get the audio session id of the WebRTC audio track (if there is one), I could use Visualizer with that specific session id instead of 0, but I didn't manage to find a way to get that session id either. I also didn't find a way to change the WebRTC audio track's usage to something other than VOICE_COMMUNICATION, to at least test my assumption.
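To make the idea concrete, here is roughly what I have in mind, as a sketch only: attaching Visualizer to a specific session id instead of 0. The sessionId parameter is the missing piece (I found no way to obtain it from the WebRTC audio track), so this is untested on a real stream; it also assumes RECORD_AUDIO permission is granted.

```java
import android.media.audiofx.Visualizer;

public class StreamMeter {

    // sessionId is hypothetical: ideally it would be the audio session id of
    // the WebRTC audio track. Passing 0 (the output mix) yields no useful
    // data, presumably because VOICE_COMMUNICATION tracks block capture.
    public static Visualizer attach(int sessionId) {
        Visualizer visualizer = new Visualizer(sessionId);
        visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
        visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
            @Override
            public void onWaveFormDataCapture(Visualizer v, byte[] waveform,
                                              int samplingRate) {
                // feed the waveform amplitude into the level-meter UI here
            }

            @Override
            public void onFftDataCapture(Visualizer v, byte[] fft,
                                         int samplingRate) {
                // unused: only waveform capture is requested below
            }
        }, Visualizer.getMaxCaptureRate() / 2, true, false);
        visualizer.setEnabled(true);
        return visualizer;
    }
}
```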
If this turns out to be impossible on the client, can I get real-time access to the published stream's audio data on the server, so that I can analyze it there and feed the results back to the client?