I'm trying to use ffmpeg to stream some audio/video over RTP. I've written my
own code that implements an RTSP server in a separate thread to reply to the
client requests and send the correct SDP. After the RTSP dialogue I start
sending audio/video frames after a transcoding step. My problem is that audio
and video frames arrive in the correct order for the first client, but from
the second client onward (or after a pause/resume) the audio frames are played
10-20 seconds or more late and the correct synchronization is lost.
I checked some other working streams, and I think the problem may be in the
PTS/DTS values set in each packet. I start both streams with the
PTS set to 0 (the RFC says to set it to a random value... 0 is my random :) ),
increment it by a value depending on the codec and frame rate (e.g. MPEG4
video at 25 fps gets an increment of 90000/25 = 3600 ticks), and I
send every frame with the correct timing (for the same MPEG4 video
at 25 fps, one frame every 40 milliseconds). I also assume that no B-frames
are present (so DTS must be equal to PTS... or not?). The same for the audio stream.
Subsequent clients receive audio and video frames in a broadcast-like mode, so
they start playing from the point the server has reached at that moment, but
their first PTS is still set to 0.
Can anyone tell me where the problem might be? I think it's in the PTS
settings, but I don't know where exactly.