Living Team

Programming technology sharing
DC Wang
2025-08-17
# Switching Audio Tracks During Video Playback with GStreamer

This article shows how to use GStreamer on Ubuntu 22.04 to play a video that carries several audio tracks, and how to switch between those tracks during playback.

# Creating a multi-track test video with ffmpeg

# 1. Create a 10-second test video source
ffmpeg -f lavfi -i testsrc=duration=10:size=640x480:rate=30 \
       -c:v libx264 -pix_fmt yuv420p video.mp4

# 2. Create three test audio tracks tagged with different languages
ffmpeg -f lavfi -i sine=frequency=1000:duration=10 -ar 44100 -ac 2 -acodec aac \
       -metadata:s:a:0 language=eng -y track_eng.aac
ffmpeg -f lavfi -i sine=frequency=800:duration=10 -ar 44100 -ac 2 -acodec aac \
       -metadata:s:a:0 language=fre -y track_fre.aac
ffmpeg -f lavfi -i sine=frequency=600:duration=10 -ar 44100 -ac 2 -acodec aac \
       -metadata:s:a:0 language=ger -y track_ger.aac

# 3. Mux all tracks into the final video (MKV has the best multi-track support)
ffmpeg -i video.mp4 -i track_eng.aac -i track_fre.aac -i track_ger.aac \
       -map 0:v:0 -map 1:a:0 -map 2:a:0 -map 3:a:0 \
       -c:v copy -c:a copy -disposition:a:0 default -disposition:a:1 0 -disposition:a:2 0 \
       -metadata:s:a:0 title="English" \
       -metadata:s:a:1 title="French" \
       -metadata:s:a:2 title="German" \
       multi_audio_video.mkv

After the commands finish, the generated multi_audio_video.mkv contains one video stream and three audio tracks (English, French, German).

You can verify the result with the VLC player: play the file and switch between its audio tracks.
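As a quick reference, the stream layout produced by the ffmpeg commands above can be modeled in a few lines of Python. The values come straight from the `-map`, `-metadata`, and `-disposition` flags; the layout is an illustration, not something read from the file:

```python
# Expected stream layout of multi_audio_video.mkv, derived from the
# ffmpeg flags above (illustrative only, not probed from the file).
tracks = [
    {"index": 0, "language": "eng", "title": "English", "tone_hz": 1000, "default": True},
    {"index": 1, "language": "fre", "title": "French",  "tone_hz": 800,  "default": False},
    {"index": 2, "language": "ger", "title": "German",  "tone_hz": 600,  "default": False},
]

# Only the first track is flagged as default (-disposition:a:0 default).
default_tracks = [t for t in tracks if t["default"]]
print(len(tracks), default_tracks[0]["title"])  # → 3 English
```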

# Installing the GStreamer development environment

See the official installation guide: https://gstreamer.freedesktop.org/documentation/installing/index.html?gi-language=c

On Ubuntu 22.04:

sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio

# Implementing multi-track playback and track switching

# 1. Python implementation

# gi-test.py
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import sys

class MultiAudioPlayer:
    def __init__(self, uri):
        Gst.init(sys.argv)
        
        # Create the pipeline elements
        self.pipeline = Gst.Pipeline.new("multi-audio-player")
        self.uridecodebin = Gst.ElementFactory.make("uridecodebin", "uridecodebin")
        self.video_convert = Gst.ElementFactory.make("videoconvert", "videoconvert")
        self.video_sink = Gst.ElementFactory.make("autovideosink", "video_sink")
        self.audio_selector = Gst.ElementFactory.make("input-selector", "audio_selector")
        self.audio_convert = Gst.ElementFactory.make("audioconvert", "audioconvert")
        self.audio_resample = Gst.ElementFactory.make("audioresample", "audioresample")
        self.audio_sink = Gst.ElementFactory.make("autoaudiosink", "audio_sink")
        
        # Make sure every element was created
        for element in [self.uridecodebin, self.video_convert, self.video_sink, 
                        self.audio_selector, self.audio_convert, self.audio_resample, self.audio_sink]:
            if not element:
                raise RuntimeError("Failed to create element")
        
        # Set the URI
        self.uridecodebin.set_property("uri", uri)
        
        # Add the elements to the pipeline
        for element in [self.uridecodebin, self.video_convert, self.video_sink,
                        self.audio_selector, self.audio_convert, self.audio_resample, self.audio_sink]:
            self.pipeline.add(element)
        
        # Link the static part of the pipeline
        self.video_convert.link(self.video_sink)
        self.audio_selector.link(self.audio_sink)
        # self.audio_selector.link(self.audio_convert)
        # self.audio_convert.link(self.audio_resample)
        # self.audio_resample.link(self.audio_sink)
        
        # Audio track bookkeeping
        self.audio_pads = []  # selector sink pad of every audio track
        self.audio_tracks = {}  # track index -> language info
        
        # Connect the signals
        self.uridecodebin.connect("pad-added", self.on_pad_added)
        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus.connect("message", self.on_message)
        
        # Start the pipeline
        self.pipeline.set_state(Gst.State.PLAYING)
    
    def on_pad_added(self, element, pad):
        caps = pad.get_current_caps()
        if not caps:
            return
        
        # Inspect the media type
        caps_str = caps.to_string()
        if "video/" in caps_str:
            # Video branch
            video_queue = Gst.ElementFactory.make("queue", f"video_queue_{len(self.audio_pads)}")
            self.pipeline.add(video_queue)
            video_queue.sync_state_with_parent()
            
            pad.link(video_queue.get_static_pad("sink"))
            video_queue.link(self.video_convert)
            
        elif "audio/" in caps_str:
            # Audio branch: build a fresh chain for this track
            audio_queue = Gst.ElementFactory.make("queue", f"audio_queue_{len(self.audio_pads)}")
            audio_convert = Gst.ElementFactory.make("audioconvert", f"audioconvert_{len(self.audio_pads)}")
            audio_resample = Gst.ElementFactory.make("audioresample", f"audioresample_{len(self.audio_pads)}")
            
            for el in [audio_queue, audio_convert, audio_resample]:
                self.pipeline.add(el)
                el.sync_state_with_parent()
            
            # Link the new branch
            audio_queue.link(audio_convert)
            audio_convert.link(audio_resample)
            
            # Link the decoder pad into the branch
            pad.link(audio_queue.get_static_pad("sink"))
            
            # Request a sink pad on the selector
            selector_sink_pad = self.audio_selector.get_request_pad("sink_%u")
            audio_resample.get_static_pad("src").link(selector_sink_pad)
            
            # Keep a reference to the pad
            self.audio_pads.append(selector_sink_pad)
            
            # # Extract the track's language metadata
            # tags = pad.get_tags()
            # if tags and tags.get_tag_index(Gst.TAG_LANGUAGE_CODE)[0]:
            #     lang_code = tags.get_string(Gst.TAG_LANGUAGE_CODE)[1]
            #     print(f"Found audio track {len(self.audio_pads)}: {lang_code}")
            #     self.audio_tracks[len(self.audio_pads)] = lang_code
    
    def switch_audio_track(self, track_index):
        if track_index < 0 or track_index >= len(self.audio_pads):
            print(f"Invalid track index, {len(self.audio_pads)} tracks available")
            return False
        
        # Activate the requested track (set_property returns None, so don't
        # use its return value to detect success)
        self.audio_selector.set_property("active-pad", self.audio_pads[track_index])
        print(f"Switched to track {track_index} (language: {self.audio_tracks.get(track_index + 1, 'unknown')})")
        return True
    
    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print(f"Error: {err}, {debug}")
            self.pipeline.set_state(Gst.State.NULL)
            exit()
        elif t == Gst.MessageType.EOS:
            print("Playback finished")
            self.pipeline.set_state(Gst.State.NULL)
            exit()

# Example usage
if __name__ == "__main__":
    # Replace with your own video file path or URL
    VIDEO_URI = "file:///mnt/e/Resources/Projects/GStreamer-test/video/multi_audio_video.mkv"
    
    player = MultiAudioPlayer(VIDEO_URI)
    
    # Create the main loop
    loop = GLib.MainLoop()
    
    # Example: switch tracks on a schedule (indices start at 0)
    def switch_track_0():
        print("\n=== Switching audio track ===")
        player.switch_audio_track(0)

    def switch_track_1():
        print("\n=== Switching audio track ===")
        player.switch_audio_track(1)

    def switch_track_2():
        print("\n=== Switching audio track ===")
        player.switch_audio_track(2)
    
    GLib.timeout_add_seconds(3, switch_track_1)
    GLib.timeout_add_seconds(6, switch_track_2)
    GLib.timeout_add_seconds(9, switch_track_0)
    
    try:
        loop.run()
    except KeyboardInterrupt:
        loop.quit()
        player.pipeline.set_state(Gst.State.NULL)

# Running

On Ubuntu the GStreamer Python bindings come from PyGObject (the `gi` module), which ships with the system Python, so no extra Python packages are needed; just run the script:

python3 gi-test.py
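If the import fails on your system, a small probe like the following tells you before running the full player. The apt package names in the hint are the usual Ubuntu ones, included here as a suggestion:

```python
# Probe for the GStreamer Python bindings without crashing if they are absent.
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    have_gst = True
except (ImportError, ValueError):
    have_gst = False

print("GStreamer bindings available" if have_gst
      else "Missing bindings; try: sudo apt install python3-gi gir1.2-gstreamer-1.0")
```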

# Key components

  1. uridecodebin:
    • Automatically handles demuxing and decoding
    • Emits the pad-added signal used to wire up the audio and video streams
  2. input-selector:
    • Selector element that manages multiple audio inputs
    • Its active-pad property switches tracks dynamically
    • Keeps the audio switch seamless
  3. Audio processing chain:
    • Each audio stream gets its own branch: queue → audioconvert → audioresample
    • queue handles flow control and buffering
    • audioconvert ensures format compatibility
    • audioresample copes with differing sample rates
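The switching pattern used by the timers later in the article is a plain round-robin over the selector's pads. Stripped of GStreamer, the logic is just modular arithmetic (the pad names below are hypothetical placeholders for the selector's requested sink pads):

```python
# Round-robin over input-selector sink pads, as the timer callbacks do.
def next_track(current: int, num_tracks: int) -> int:
    """Index of the track to activate after `current`."""
    return (current + 1) % num_tracks

pads = ["sink_0", "sink_1", "sink_2"]  # hypothetical selector pad names
order = []
track = 0
for _ in range(4):
    track = next_track(track, len(pads))
    order.append(pads[track])

print(order)  # → ['sink_1', 'sink_2', 'sink_0', 'sink_1']
```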

# 2. C implementation (hand-built pipeline)

// multi_audio_player.c

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>

// Global state for the player
typedef struct {
    GstElement *pipeline;
    GstElement *selector;
    GList *audio_pads;  // selector sink pads, one per audio track
    GMainLoop *loop;
    guint num_audio_tracks;
} PlayerState;

// Callback for the pad-added signal
static void on_pad_added(GstElement *element, GstPad *pad, gpointer data) {
    PlayerState *state = (PlayerState *)data;
    GstCaps *caps = gst_pad_get_current_caps(pad);
    if (!caps) return;
    
    GstStructure *structure = gst_caps_get_structure(caps, 0);
    const gchar *name = gst_structure_get_name(structure);
    
    g_print("Pad added: %s\n", name);
    
    if (g_str_has_prefix(name, "video/")) {
        // Video branch
        GstElement *queue = gst_element_factory_make("queue", "video_queue");
        GstElement *videoconvert = gst_bin_get_by_name(GST_BIN(state->pipeline), "videoconvert");
        GstElement *videosink = gst_bin_get_by_name(GST_BIN(state->pipeline), "videosink");
        
        if (queue && videoconvert && videosink) {
            gst_bin_add(GST_BIN(state->pipeline), queue);
            gst_element_link_many(queue, videoconvert, videosink, NULL);
            
            GstPad *sinkpad = gst_element_get_static_pad(queue, "sink");
            gst_pad_link(pad, sinkpad);
            gst_object_unref(sinkpad);
            
            gst_element_sync_state_with_parent(queue);
            g_print("Video track connected\n");
        } else {
            g_printerr("Failed to create video processing elements\n");
        }
        
        if (videoconvert) gst_object_unref(videoconvert);
        if (videosink) gst_object_unref(videosink);
    } else if (g_str_has_prefix(name, "audio/")) {
        // Audio branch: build a complete processing chain for each track
        state->num_audio_tracks++;
        gchar *queue_name = g_strdup_printf("audio_queue_%u", state->num_audio_tracks);
        gchar *convert_name = g_strdup_printf("audioconvert_%u", state->num_audio_tracks);
        gchar *resample_name = g_strdup_printf("audioresample_%u", state->num_audio_tracks);
        gchar *capsfilter_name = g_strdup_printf("capsfilter_%u", state->num_audio_tracks);
        
        GstElement *queue = gst_element_factory_make("queue", queue_name);
        GstElement *audioconvert = gst_element_factory_make("audioconvert", convert_name);
        GstElement *audioresample = gst_element_factory_make("audioresample", resample_name);
        GstElement *capsfilter = gst_element_factory_make("capsfilter", capsfilter_name);
        
        g_free(queue_name);
        g_free(convert_name);
        g_free(resample_name);
        g_free(capsfilter_name);
        
        if (queue && audioconvert && audioresample && capsfilter) {
            gst_bin_add_many(GST_BIN(state->pipeline), queue, audioconvert, audioresample, capsfilter, NULL);
            
            // Constrain the capsfilter to one common format
            GstCaps *audio_caps = gst_caps_new_simple("audio/x-raw",
                "format", G_TYPE_STRING, "S16LE",
                "channels", G_TYPE_INT, 2,
                "rate", G_TYPE_INT, 48000,
                NULL);
            g_object_set(capsfilter, "caps", audio_caps, NULL);
            gst_caps_unref(audio_caps);
            
            // Link the audio processing chain
            gst_element_link_many(queue, audioconvert, audioresample, capsfilter, NULL);
            
            // Hook the chain up to the selector
            GstPad *sinkpad = gst_element_get_request_pad(state->selector, "sink_%u");
            if (sinkpad) {
                GstPad *srcpad = gst_element_get_static_pad(capsfilter, "src");
                
                if (srcpad && gst_pad_link(srcpad, sinkpad) == GST_PAD_LINK_OK) {
                    state->audio_pads = g_list_append(state->audio_pads, sinkpad);
                    
                    // Link the decoder pad into the chain
                    GstPad *queue_sink = gst_element_get_static_pad(queue, "sink");
                    gst_pad_link(pad, queue_sink);
                    gst_object_unref(queue_sink);
                    
                    // Bring the new elements up to the pipeline state
                    gst_element_sync_state_with_parent(queue);
                    gst_element_sync_state_with_parent(audioconvert);
                    gst_element_sync_state_with_parent(audioresample);
                    gst_element_sync_state_with_parent(capsfilter);
                    
                    g_print("Added audio track %u, total tracks: %u\n", 
                           state->num_audio_tracks, g_list_length(state->audio_pads));
                    
                    // // Extract the track metadata
                    // GstTagList *tags = NULL;
                    // if (gst_pad_get_tags(pad, &tags) && tags) {
                    //     gchar *lang_code = NULL;
                    //     if (gst_tag_list_get_string(tags, GST_TAG_LANGUAGE_CODE, &lang_code)) {
                    //         g_print(" - language: %s\n", lang_code ? lang_code : "unknown");
                    //         g_free(lang_code);
                    //     }
                    //     gst_tag_list_unref(tags);
                    // }
                    
                    gst_object_unref(srcpad);
                } else {
                    g_printerr("Failed to link the queue to the selector\n");
                    if (srcpad) gst_object_unref(srcpad);
                    gst_object_unref(sinkpad);
                }
            } else {
                g_printerr("Failed to get a sink pad from the selector\n");
            }
        } else {
            g_printerr("Failed to create audio processing elements\n");
        }
    }
    
    gst_caps_unref(caps);
}

// Callback for bus messages
static gboolean bus_callback(GstBus *bus, GstMessage *msg, gpointer data) {
    PlayerState *state = (PlayerState *)data;
    
    switch (GST_MESSAGE_TYPE(msg)) {
        case GST_MESSAGE_ERROR: {
            GError *err = NULL;
            gchar *debug = NULL;
            gst_message_parse_error(msg, &err, &debug);
            g_printerr("Error: %s\n", err->message);
            if (debug) g_printerr("Debug info: %s\n", debug);
            g_error_free(err);
            g_free(debug);
            g_main_loop_quit(state->loop);
            break;
        }
        case GST_MESSAGE_EOS:
            g_print("Playback finished\n");
            g_main_loop_quit(state->loop);
            break;
        case GST_MESSAGE_STATE_CHANGED:
            if (GST_MESSAGE_SRC(msg) == GST_OBJECT(state->pipeline)) {
                GstState old_state, new_state, pending;
                gst_message_parse_state_changed(msg, &old_state, &new_state, &pending);
                g_print("Pipeline state changed: %s -> %s\n",
                       gst_element_state_get_name(old_state),
                       gst_element_state_get_name(new_state));
            }
            break;
        case GST_MESSAGE_STREAM_STATUS:
            // Stream status changes
            break;
        case GST_MESSAGE_STREAM_START:
            // Stream start
            break;
        default:
            break;
    }
    return TRUE;
}

// Timer callback that switches the audio track
static gboolean switch_audio_track(gpointer data) {
    PlayerState *state = (PlayerState *)data;
    static guint track_index = 0;
    
    guint num_tracks = g_list_length(state->audio_pads);
    if (num_tracks > 0) {
        track_index = (track_index + 1) % num_tracks;
        GstPad *pad = (GstPad *)g_list_nth_data(state->audio_pads, track_index);
        
        if (pad) {
            // Pause the pipeline to avoid synchronization glitches while switching
            gst_element_set_state(state->pipeline, GST_STATE_PAUSED);
            
            // Activate the new pad
            g_object_set(state->selector, "active-pad", pad, NULL);
            
            // Resume playback
            gst_element_set_state(state->pipeline, GST_STATE_PLAYING);
            
            g_print("Switched to audio track %u\n", track_index);
        }
    }
    
    return TRUE;  // keep the timer running
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    
    if (argc < 2) {
        g_printerr("Usage: %s <video file>\n", argv[0]);
        return -1;
    }
    
    // Create the main loop
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    
    // Initialize the state
    PlayerState state = {
        .pipeline = NULL,
        .selector = NULL,
        .audio_pads = NULL,
        .loop = loop,
        .num_audio_tracks = 0
    };
    
    // Create the pipeline
    state.pipeline = gst_pipeline_new("multi-audio-player");
    
    // Create the elements
    GstElement *uridecodebin = gst_element_factory_make("uridecodebin", "source");
    GstElement *videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
    GstElement *videosink = gst_element_factory_make("autovideosink", "videosink");
    state.selector = gst_element_factory_make("input-selector", "audio_selector");
    GstElement *audioconvert = gst_element_factory_make("audioconvert", "audioconvert");
    GstElement *audioresample = gst_element_factory_make("audioresample", "audioresample");
    GstElement *audiosink = gst_element_factory_make("autoaudiosink", "audiosink");
    
    // Make sure every element was created
    if (!state.pipeline || !uridecodebin || !videoconvert || !videosink || 
        !state.selector || !audioconvert || !audioresample || !audiosink) {
        g_printerr("Failed to create elements\n");
        return -1;
    }
    
    // Set the URI
    gchar *uri = gst_filename_to_uri(argv[1], NULL);
    g_object_set(uridecodebin, "uri", uri, NULL);
    g_free(uri);
    
    // Add all elements to the pipeline
    gst_bin_add_many(GST_BIN(state.pipeline), uridecodebin, videoconvert, videosink,
                     state.selector, audioconvert, audioresample, audiosink, NULL);
    
    // Link the static part: the shared audio chain
    if (!gst_element_link_many(state.selector, audioconvert, audioresample, audiosink, NULL)) {
        g_printerr("Failed to link the audio chain\n");
        return -1;
    }
    
    // Connect the signal
    g_signal_connect(uridecodebin, "pad-added", G_CALLBACK(on_pad_added), &state);
    
    // Watch the bus
    GstBus *bus = gst_element_get_bus(state.pipeline);
    gst_bus_add_watch(bus, bus_callback, &state);
    gst_object_unref(bus);
    
    // Set the pipeline to PLAYING
    GstStateChangeReturn ret = gst_element_set_state(state.pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE) {
        g_printerr("Failed to set the pipeline to PLAYING\n");
        gst_object_unref(state.pipeline);
        return -1;
    }
    
    // Add a timer that switches the audio track every 5 seconds
    g_timeout_add_seconds(5, switch_audio_track, &state);
    
    // Run the main loop
    g_print("Playing; tracks will switch automatically...\n");
    g_main_loop_run(loop);
    
    // Clean up
    gst_element_set_state(state.pipeline, GST_STATE_NULL);
    gst_object_unref(state.pipeline);
    
    // Free the list of audio pads
    g_list_free_full(state.audio_pads, gst_object_unref);
    
    g_main_loop_unref(loop);
    return 0;
}

# Build and run

Compile:

gcc -o multi_audio_player multi_audio_player.c $(pkg-config --cflags --libs gstreamer-1.0)

Run:

./multi_audio_player /path/to/your/multi_audio_video.mkv

# 3. C implementation (using playbin)

playbin is the high-level playback element that GStreamer provides. It wraps the complete playback machinery, including multi-track support, so using playbin greatly simplifies multi-audio-track playback.

// playbin_player.c

#include <gst/gst.h>
#include <glib.h>
#include <locale.h>

typedef struct {
    GstElement *pipeline;  // the playbin element
    GMainLoop *loop;
    guint num_audio_tracks;
    guint timer_id;        // ID of the track-switch timer
} PlayerState;

// Callback for bus messages
static gboolean bus_callback(GstBus *bus, GstMessage *msg, gpointer data) {
    PlayerState *state = (PlayerState *)data;
    
    switch (GST_MESSAGE_TYPE(msg)) {
        case GST_MESSAGE_ERROR: {
            GError *err = NULL;
            gchar *debug = NULL;
            gst_message_parse_error(msg, &err, &debug);
            g_printerr("Error: %s\n", err->message);
            if (debug) g_printerr("Debug info: %s\n", debug);
            g_error_free(err);
            g_free(debug);
            
            // Stop the timer
            if (state->timer_id != 0) {
                g_source_remove(state->timer_id);
                state->timer_id = 0;
            }
            
            g_main_loop_quit(state->loop);
            break;
        }
        case GST_MESSAGE_EOS:
            g_print("Playback finished\n");
            
            // Stop the timer
            if (state->timer_id != 0) {
                g_source_remove(state->timer_id);
                state->timer_id = 0;
            }
            
            g_main_loop_quit(state->loop);
            break;
        case GST_MESSAGE_STATE_CHANGED:
            if (GST_MESSAGE_SRC(msg) == GST_OBJECT(state->pipeline)) {
                GstState old_state, new_state, pending;
                gst_message_parse_state_changed(msg, &old_state, &new_state, &pending);
                
                // Once the pipeline reaches PLAYING, query the audio tracks
                if (new_state == GST_STATE_PLAYING && old_state == GST_STATE_PAUSED) {
                    // Number of audio tracks
                    g_object_get(state->pipeline, "n-audio", &state->num_audio_tracks, NULL);
                    g_print("Detected %u audio tracks\n", state->num_audio_tracks);
                    
                    // Print each track's title tag
                    for (guint i = 0; i < state->num_audio_tracks; i++) {
                        GstTagList *tags = NULL;
                        g_signal_emit_by_name(state->pipeline, "get-audio-tags", i, &tags);
                        
                        if (tags) {
                            gchar *title = NULL;
                            if (gst_tag_list_get_string(tags, "title", &title)) {
                                g_print("Track %u: title: %s\n", i, title ? title : "unknown");
                                g_free(title);
                            }
                            gst_tag_list_unref(tags);
                        }
                    }
                }
            }
            break;
        case GST_MESSAGE_STREAM_COLLECTION: {
            // Stream collection message (describes every audio/video stream)
            GstStreamCollection *collection;
            gst_message_parse_stream_collection(msg, &collection);
            
            guint num_streams = gst_stream_collection_get_size(collection);
            g_print("Stream collection updated, %u streams in total\n", num_streams);
            
            for (guint i = 0; i < num_streams; i++) {
                GstStream *stream = gst_stream_collection_get_stream(collection, i);
                GstStreamType stream_type = gst_stream_get_stream_type(stream);
                
                if (stream_type & GST_STREAM_TYPE_AUDIO) {
                    gchar *title = NULL;
                    GstTagList *tags = gst_stream_get_tags(stream);
                    if (tags) {
                        if (!gst_tag_list_get_string(tags, "title", &title)) {
                            title = NULL;
                        }
                    }
                    
                    g_print("Audio stream %u: %s\n", i, title ? title : "unknown title");
                    
                    g_free(title);
                    if (tags) gst_tag_list_unref(tags);
                }
            }
            
            gst_object_unref(collection);
            break;
        }
        default:
            break;
    }
    return TRUE;
}

// Timer callback that switches the audio track
static gboolean switch_audio_track(gpointer data) {
    PlayerState *state = (PlayerState *)data;
    static guint current_track = 0;
    
    // Check whether the pipeline is still playing
    GstState current_state;
    gst_element_get_state(state->pipeline, &current_state, NULL, GST_CLOCK_TIME_NONE);
    
    if (current_state != GST_STATE_PLAYING) {
        g_print("Playback stopped, cancelling the timer\n");
        state->timer_id = 0;  // mark the timer as removed
        return G_SOURCE_REMOVE;  // remove the timer
    }
    
    if (state->num_audio_tracks > 0) {
        current_track = (current_track + 1) % state->num_audio_tracks;
        
        // Select the track
        g_object_set(state->pipeline, "current-audio", current_track, NULL);
        
        // Fetch the new track's title tag
        GstTagList *tags = NULL;
        g_signal_emit_by_name(state->pipeline, "get-audio-tags", current_track, &tags);
        
        if (tags) {
            gchar *title = NULL;
            if (gst_tag_list_get_string(tags, "title", &title)) {
                g_print("Switched to audio track %u (title: %s)\n", current_track, title ? title : "unknown");
                g_free(title);
            }
            gst_tag_list_unref(tags);
        } else {
            g_print("Switched to audio track %u\n", current_track);
        }
    }
    
    return TRUE;  // keep the timer running
}

int main(int argc, char *argv[]) {
    // Use the environment's locale for console output
    setlocale(LC_ALL, "");
    
    gst_init(&argc, &argv);
    
    if (argc < 2) {
        g_printerr("Usage: %s <video file>\n", argv[0]);
        return -1;
    }
    
    // Create the main loop
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    
    // Initialize the state
    PlayerState state = {
        .pipeline = NULL,
        .loop = loop,
        .num_audio_tracks = 0,
        .timer_id = 0  // no timer yet
    };
    
    // Create the playbin pipeline
    state.pipeline = gst_element_factory_make("playbin", "player");
    if (!state.pipeline) {
        g_printerr("Failed to create the playbin element\n");
        return -1;
    }
    
    // Set the URI
    gchar *uri = gst_filename_to_uri(argv[1], NULL);
    if (!uri) {
        g_printerr("Failed to convert the file path to a URI: %s\n", argv[1]);
        return -1;
    }
    
    g_object_set(state.pipeline, "uri", uri, NULL);
    g_free(uri);
    
    // Watch the bus
    GstBus *bus = gst_element_get_bus(state.pipeline);
    gst_bus_add_watch(bus, bus_callback, &state);
    gst_object_unref(bus);
    
    // Set the pipeline to PLAYING
    GstStateChangeReturn ret = gst_element_set_state(state.pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE) {
        g_printerr("Failed to set the pipeline to PLAYING\n");
        gst_object_unref(state.pipeline);
        return -1;
    }
    
    // Add a timer that switches the audio track every 3 seconds
    state.timer_id = g_timeout_add_seconds(3, switch_audio_track, &state);
    
    // Run the main loop
    g_print("Playing; tracks will switch automatically...\n");
    g_main_loop_run(loop);
    
    // Clean up
    gst_element_set_state(state.pipeline, GST_STATE_NULL);
    gst_object_unref(state.pipeline);
    g_main_loop_unref(loop);
    
    return 0;
}

# Build and run

Compile:

gcc -o playbin_player playbin_player.c $(pkg-config --cflags --libs gstreamer-1.0)

Run:

./playbin_player /path/to/multi_audio_video.mkv

# Core functionality

  1. Querying audio track info

    // Number of audio tracks
    g_object_get(state->pipeline, "n-audio", &state->num_audio_tracks, NULL);
    
    // Tags of a particular track
    GstTagList *tags = NULL;
    g_signal_emit_by_name(state->pipeline, "get-audio-tags", i, &tags);
    
    // Read the track's title from the tags
    gchar *title = NULL;
    if (gst_tag_list_get_string(tags, "title", &title)) {
        g_print("Track %u: title: %s\n", i, title ? title : "unknown");
        g_free(title);
    }
    
  2. Switching the audio track

    // Select the current audio track
    g_object_set(state->pipeline, "current-audio", track_index, NULL);
    

# References

Official documentation: https://gstreamer.freedesktop.org/documentation
