I encode the video frame by frame using x264 + ffmpeg (libav*). The original video plays beautifully, but the first few frames of my transcoded video show gray artifacts. I understand that this is due to temporal compression (inter-frame prediction), and these artifacts disappear after a few frames.
See these two images, which are the first and second frames. The third frame is normal (i.e. it has no gray artifact and is not blurred like the second).

How can I force the first frame to be a key frame (i.e. fully intra-coded in my output video) so that these artifacts are not displayed?
Edit - more details
Here is what I do in more detail. I pieced together bits of different tutorials to read the video frame by frame and transcode each frame into a new video. My encoder options are as follows:
    avcodec_get_context_defaults3(c, *codec);
    c->codec_id = codec_id;
    c->bit_rate = output_bitrate;
    c->width    = output_width;
    c->height   = output_height;
    st->r_frame_rate.num = output_framerate_num;
    st->r_frame_rate.den = output_framerate_den;
    c->time_base.den = output_timebase_den;
    c->time_base.num = output_timebase_num;
    c->gop_size = 3;
    c->pix_fmt  = STREAM_PIX_FMT;
    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        c->max_b_frames = 2;
    }
    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        c->mb_decision = 2;
    }
    c->max_b_frames = 2;
    c->scenechange_threshold = 0;
    c->rc_buffer_size = 0;
    c->me_method = ME_ZERO;
Then I process each frame; maybe something is wrong there. Decoding part:
    while (av_read_frame(gFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == gVideoStreamIndex) {
            // Decode video frame
            avcodec_decode_video2(gVideoCodecCtx, pCurrentFrame, &frameFinished, &packet);
            // Did we get a video frame?
            if (frameFinished) {
                [...]
                if (firstPts == -999) /* initial value */
                    firstPts = packet.pts;
                deltaPts = packet.pts - firstPts;
                double seconds = deltaPts * av_q2d(gFormatCtx->streams[gVideoStreamIndex]->time_base);
                [...]
                muxing_writeVideoFrame(pCurrentFrame, packet.pts);
            }
        }
    }
The actual writing:
    int muxing_writeVideoFrame(AVFrame *frame, int64_t pts) {
        frameCount = frameCount + 1;
        if (frameCount > 0) {
            if (video_st)
                video_pts = (double)video_st->pts.val * video_st->time_base.num /
                            video_st->time_base.den;
            else
                video_pts = 0.0;

            if (video_st && !(video_st && audio_st && audio_pts < video_pts)) {
                frame->pts = pts; // av_rescale_q(frame_count, video_st->codec->time_base, video_st->time_base);
                write_video_frame(oc, video_st, frame);
            }
        }
        return 0;
    }

    static int write_video_frame(AVFormatContext *oc, AVStream *st, AVFrame *frame) {
        int ret;
        static struct SwsContext *sws_ctx;
        //LOGI(10, frame_count);
        AVCodecContext *c = st->codec;

        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL; // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;
            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }

        if (ret != 0) {
            LOGI(10, av_err2str(ret));
            exit(1);
        }

        frame_count++;
        return got_output;
    }