语音信号识别及处理中英文翻译文献综述.doc (Speech Signal Recognition and Processing: A Chinese-English Translated Literature Review)

Speech Recognition

In computer technology, speech recognition refers to the recognition of human speech by computers for the performance of speaker-initiated, computer-generated functions (for example, transcribing speech to text; data entry; operating electronic and mechanical devices; automated processing of telephone calls). It is a main element of so-called natural language processing through computer speech technology. Speech derives from sounds created by the human articulatory system, including the lungs, vocal cords, and tongue. Through exposure to variations in speech patterns during infancy, a child learns to recognize the same words or phrases despite different modes of pronunciation by different people, for example pronunciation differing in pitch, tone, emphasis, and intonation pattern; the cognitive ability of the human brain makes this remarkable feat possible. As of this writing, we can reproduce that capability in computers only to a limited degree, yet in many ways it is still useful.

The Challenge of Speech Recognition

Writing systems are ancient, going back as far as the Sumerians 6,000 years ago. The phonograph, which allowed the analog recording and playback of speech, dates to 1877. Speech recognition, however, had to await the development of the computer, because of the many problems that recognizing speech presents.

First, speech is not simply spoken text, in the same way that Miles Davis's playing can hardly be captured by a note-for-note rendition as sheet music. What humans understand as discrete words, phrases, or sentences with clear boundaries is actually delivered as a continuous stream of sound: "Iwenttothestoreyesterday" rather than "I went to the store yesterday." Words can also blend together, with "Whaddayawant?" standing for "What do you want?"

Second, there is no one-to-one correlation between sounds and letters. English has slightly more than five vowel letters: a, e, i, o, u, and sometimes y and w. There are more than twenty different vowel sounds, though, and the exact count depends on the speaker's accent. The reverse problem also occurs, where more than one letter can represent a given sound: the letter c can have the same sound as the letter k, as in "cake," or as the letter s, as in "citrus."

In addition, people who speak the same language do not use the same sounds; that is, languages vary in their phonology, or patterns of sound organization, and speakers have different accents. The word "water," for example, may be pronounced "watter," "wadder," "woader," "wattah," and so on. Each person also has a distinctive pitch: men typically speak at the lowest pitch, while women and children have a higher pitch (though there is wide variation and overlap within each group). Pronunciation is further colored by adjacent sounds, by the speed at which a person talks, and even by the speaker's health; consider how pronunciation changes when someone has a cold.

Finally, not all speech consists of meaningful sounds. Some utterances carry no meaning in themselves but serve to break up discourse or to convey subtle information about the speaker's feelings or intentions: "oh," "like," "you know," "well." There are also sounds that are part of speech but are not considered words: "er," "um," "uh." Coughing, sneezing, laughing, sobbing, and even hiccupping can all be part of what is said. And the environment adds its own noise; speech recognition is difficult even for humans in noisy places.

[Figure: waveform of "I went to the store yesterday"]
[Figure: spectrogram of "I went to the store yesterday"]

The History of Speech Recognition

Despite the manifold difficulties, speech recognition has been attempted for almost as long as there have been digital computers. As early as 1952, researchers at Bell Labs had developed an automatic digit recognizer named Audrey. If the speaker was male, paused 350 milliseconds between words, limited his vocabulary to the digits one through nine plus "oh," and if the machine could be adjusted to the speaker's speech profile, Audrey attained an accuracy of 97 to 99 percent. If the recognizer could not be adjusted, accuracy dipped as low as 60 percent.

Audrey worked by recognizing phonemes, individual sounds considered distinct from one another; these were matched against reference models of phonemes generated by training the recognizer. Over the next two decades researchers spent large amounts of time and money trying to improve on this concept, with little success. Computer hardware improved by leaps and bounds, speech synthesis improved steadily, and Chomsky's theory of generative grammar suggested that language could be analyzed programmatically. None of this, however, seemed to advance speech recognition. Chomsky and Halle's generative work in phonology also led mainstream linguistics to abandon the concept of the phoneme altogether, in favor of breaking the sound patterns of language into smaller, more discrete features.

In 1969 John Pierce wrote a forthright letter to the Journal of the Acoustical Society of America, where most research results on speech recognition were published. Pierce was one of the pioneers of satellite communications and an executive vice president at Bell Labs, which was a leader in speech recognition research. Pierce said that everyone involved was wasting time and money:

"It would be too simple to say that work in speech recognition is carried out simply because one can get money for it. The attraction is perhaps similar to the attraction of schemes for turning water into gasoline, extracting gold from the sea, curing cancer, or going to the moon. One doesn't attract thoughtlessly given dollars by means of schemes for cutting the cost of soap by 10%. To sell suckers, one uses deceit and offers glamor."

Pierce's 1969 letter marked the end of a decade of sustained research at Bell Labs. The defense research agency ARPA, however, chose to persevere. In 1971 it sponsored a research initiative to develop a speech recognizer that could handle at least 1,000 words and understand connected speech, that is, speech without clear pauses between words. The recognizer could assume a low-background-noise environment, and it did not need to work in real time.

By 1976, three contractors had developed six systems. The most successful, developed by Carnegie Mellon University, was called Harpy. Harpy was slow: a four-second sentence took more than five minutes to process. It also required speakers to build up a reference model by speaking sentences. Nonetheless, it did recognize a 1,000-word vocabulary, and it did support connected speech.

Research continued along several paths, but Harpy became the model for future success. It used hidden Markov models and statistical modeling to extract meaning from speech. In essence, speech was broken into overlapping segments of sound, and probabilistic models inferred the most likely words or parts of words in each segment. The whole procedure is computationally intensive, but it has proven the most successful.

Research on speech recognition continued through the 1970s and 1980s. By the 1980s most researchers were using hidden Markov models, which underlie all contemporary speech recognizers. In the late 1980s and the 1990s, DARPA funded several further initiatives. The first was similar to the earlier challenge, a 1,000-word vocabulary, but this time with a more rigorous accuracy requirement; it produced systems that brought the word error rate down from ten percent. Other projects concentrated on improving algorithms and computational efficiency.

Microsoft released a speech recognition system that worked with Office XP. It encapsulated both how far the technology had come in fifty years and what its limitations still were. The system had to be trained to a specific user's voice, using provided works of great authors such as Edgar Allan Poe's "The Fall of the House of Usher" and Bill Gates's "The Way Forward." Even after training, the system was fragile enough to ship with a warning: "If you change the room in which you use Microsoft Speech Recognition and your accuracy drops, run the Microphone Wizard again." On the plus side, the system did work in real time, and it did recognize connected speech.

Speech Recognition Today

Technology

Current speech recognition technology works by mathematically analyzing, through resonance and spectrum analysis, the sound waves our voices produce. A computer system first records the sound waves arriving at a microphone through an analog-to-digital converter. The analog, continuous sound wave produced when we say a word is sliced into small time fragments, and these fragments are measured by their amplitude levels, amplitude being the air pressure produced by the speaker's mouth. To measure amplitude levels and convert the sound wave into digital form, speech recognition work today generally relies on the Nyquist-Shannon theorem.

The Nyquist-Shannon Theorem

The Nyquist-Shannon theorem, developed in 1928, shows that an analog signal of a given frequency can be accurately reconstructed from a digital signal sampled at twice that frequency. Nyquist proved that this holds because a sound wave must be sampled once per compression and once per rarefaction. For example, a 20 kHz audio signal can be accurately represented by digital samples taken at 44.1 kHz.

How It Works

Speech recognition systems generally use statistical models to account for differences in dialect, accent, background noise, and pronunciation. These models have progressed to the point that accuracy of over 90% can be achieved in a quiet environment. While every company has its own proprietary techniques for processing input, there are four common approaches to how speech is recognized.

1. Template-based: this approach uses a database of speech built into the program. When speech is fed into the system, the recognizer works by matching it against the database, using a dynamic programming algorithm. This technique has declined because such a model cannot understand types of speech that are not in its database.

2. Knowledge-based: knowledge-based speech recognition analyzes spectrograms of speech to gather data and formulate rules that map this information onto the equivalents of the operator's commands and statements. This approach does not make use of knowledge about the language or phonetics of the speech.

3. Stochastic: stochastic speech recognition is the most common today. Stochastic analysis uses probabilistic models to handle the uncertainty of the spoken input. The most popular such model is the HMM (hidden Markov model), which can be expressed as follows: the recognizer chooses the hypothesized word string W that maximizes p(W | Yt), which is proportional to p(Yt | W) * p(W), where Yt is the observed acoustic data, p(W) is the prior probability of the particular word string W, and p(Yt | W) is the probability of observing that acoustic data given the acoustic model. The HMM has proven successful at analyzing spoken input because the algorithm takes into account a language model, a model of human speech sounds, and the full known vocabulary.

4. Connectionist: in connectionist speech recognition, knowledge about the spoken input is acquired by analyzing the input signal and storing it, in several ways, in a time-delay neural network built from simple multi-layer perceptrons.

As noted above, programs that analyze speech with stochastic models are the most popular today and have proven the most successful.

Recognizing Commands

The most important task of today's speech recognition software is recognizing commands, which extends what the software can do. Microsoft Sync, for example, is built into many new cars and is said to give users hands-free control of all of the car's electronic accessories. The software is a success. It asks the user a series of questions and uses the pronunciation of common words to derive speech constants; these constants then become part of the speech recognition algorithm, allowing better recognition later on. Technology reviewers agree the technology has come a long way since the 1990s, but do not expect it to replace manual controls any time soon.

Dictation

Second to command recognition is dictation. As discussed below, today's market values dictation software for transcribing medical records and student papers, and as a more practical way of turning thoughts into text. Many companies also see value in dictation for translation, in which a user's speech would be rendered as written text that can then be communicated to other speakers of the user's native language. Software of this kind is already being produced for today's market.

Errors in Recognition

When speech recognition technologies process your sentences, their accuracy is measured by their ability to reduce errors. The standards by which they are judged are the single word error rate (SWER) and the command success rate (CSR). A single word error occurs when one word in a sentence is mis-recognized. While SWERs occur in command recognition systems, they are most common in dictation software. The command success rate is determined by accurate interpretation of the command: a spoken command may not be interpreted with complete accuracy, but the recognition system can use statistical models to infer the command the user intended.

Business

Major Speech Technology Companies

As the speech technology industry has grown, more companies have entered the field with new products and ideas. The following is a partial list of leading companies in speech recognition technology.

NICE Systems (NASDAQ: NICE and Tel Aviv: NICE), founded in 1986 and headquartered in Israel, specializes in digital recording and archiving technologies, with revenues of US$523 million.

Verint Systems (OTC: VRNT), headquartered in Melville, New York, and founded in 1994, positions itself as "a leading provider of workforce optimization intelligence solutions, IP video, communications interception, and public safety equipment."

Nuance (NASDAQ: NUAN), headquartered in Burlington, develops speech and imaging technologies for business and customer service applications.

Vlingo, headquartered in Cambridge, develops speech recognition technology that interfaces with wireless/mobile technologies. Vlingo recently joined forces with Yahoo!, providing the speech recognition component for the one-click voice feature of Yahoo!'s mobile search service.

Other major companies in the speech technology field include Unisys, ChaCha, SpeechCycle, Sensory, Microsoft's Tellme, Klausner Technologies, and others.

Patent Infringement Litigation

Given how competitive both the business and the technology are, it is no surprise that there have been numerous patent infringement suits between companies. Every element involved in developing a speech recognition device can be patented as a separate technology. Using a technology already patented by another company or individual, even one you developed independently, can make you liable for damages and may, fairly or not, bar you from using that technology in the future. The politics and business of the speech industry are tightly bound up with the development of the technology itself, so the political and legal obstacles that may impede the industry's further development must be recognized. Many such suits have been filed, and many have gone to court.

The Future of Speech Recognition: Trends and Applications

Health Care

The health care industry has been promoting electronic medical records (EMR) for years. Unfortunately, the industry has been slow to adopt EMRs, and some companies conclude that the reason is data entry: there are not enough people to key the mass of patient information into electronic form, so paper records persist. One company, Nuance (which also appears elsewhere in this field as the developer of the software known as Dragon Dictate), believes it has found a market for its speech recognition software among physicians who would rather dictate patient information than type it.

Military

The defense industry is studying speech recognition software in an effort to make its complex applications more efficient and approachable. To give pilots faster, easier access to the databases they need, speech recognition is currently being tested on cockpit displays. Military command centers are likewise trying to use speech recognition as a quick, simple way to reach their vast databases in critical situations. In addition, the military has taken up EMR for the care of wounded personnel, announcing efforts to use speech recognition software to convert speech into patient records.
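The single word error rate described above can be made concrete. The sketch below (our own illustration, not part of the original text; the function name and example sentences are invented) computes a word error rate as the word-level edit distance between a reference transcript and a recognizer's hypothesis, divided by the number of reference words.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance (substitutions + deletions +
    insertions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six gives an error rate of 1/6.
print(word_error_rate("I went to the store yesterday",
                      "I went to store yesterday"))
```

Production scoring tools additionally align the two word sequences to report substitutions, deletions, and insertions separately, but the ratio above is the quantity the SWER metric rests on.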

Appendix: English Original

Speech Recognition

In computer technology, Speech Recognition refers to the recognition of human speech by computers for the performance of speaker-initiated, computer-generated functions (e.g., transcribing speech to text; data entry; operating electronic and mechanical devices; automated processing of telephone calls), a main element of so-called natural language processing through computer speech technology.

Speech derives from sounds created by the human articulatory system, including the lungs, vocal cords, and tongue. Through exposure to variations in speech patterns during infancy, a child learns to recognize the same words or phrases despite different modes of pronunciation by different people, e.g., pronunciation differing in pitch, tone, emphasis, and intonation pattern; the cognitive ability of humans achieves that remarkable capability. As of this writing, we can reproduce that capability in computers only to a limited degree, but in many ways still usefully.

The Challenge of Speech Recognition

Writing systems are ancient, going back as far as the Sumerians of 6,000 years ago. The phonograph, which allowed the analog recording and playback of speech, dates to 1877. Speech recognition had to await the development of the computer, however, due to multifaceted problems with the recognition of speech.

First, speech is not simply spoken text, in the same way that Miles Davis's playing can hardly be captured by a note-for-note rendition as sheet music. What humans understand as discrete words, phrases, or sentences with clear boundaries is actually delivered as a continuous stream of sounds: Iwenttothestoreyesterday, rather than I went to the store yesterday. Words can also blend, with Whaddayawant? representing What do you want?

Second, there is no one-to-one correlation between the sounds and letters. In English there are slightly more than five vowel letters: a, e, i, o, u, and sometimes y and w. There are more than twenty different vowel sounds, though, and the exact count can vary depending on the accent of the speaker. The reverse problem also occurs, where more than one letter can represent a given sound. The letter c can have the same sound as the letter k, as in cake, or as the letter s, as in citrus.

In addition, people who speak the same language do not use the same sounds, i.e. languages vary in their phonology, or patterns of sound organization. There are different accents: the word water could be pronounced watter, wadder, woader, wattah, and so on. Each person has a distinctive pitch when they speak, men typically having the lowest pitch, women and children a higher pitch (though there is wide variation and overlap within each group). Pronunciation is also colored by adjacent sounds, the speed at which the user is talking, and even the user's health. Consider how pronunciation changes when a person has a cold.

Lastly, consider that not all sounds consist of meaningful speech. Regular speech is filled with interjections that do not have meaning in themselves, but serve to break up discourse and convey subtle information about the speaker's feelings or intentions: oh, like, you know, well. There are also sounds that are a part of speech that are not considered words: er, um, uh. Coughing, sneezing, laughing, sobbing, and even hiccupping can be a part of what is said. And the environment adds its own noises; speech recognition is difficult even for humans in noisy places.

History of Speech Recognition

Despite the manifold difficulties, speech recognition has been attempted for almost as long as there have been digital computers. As early as 1952, researchers at Bell Labs had developed an Automatic Digit Recognizer, or "Audrey." Audrey attained an accuracy of 97 to 99 percent if the speaker was male, if the speaker paused 350 milliseconds between words, if the speaker limited his vocabulary to the digits from one to nine (plus "oh"), and if the machine could be adjusted to the speaker's speech profile. Results dipped as low as 60 percent if the recognizer was not adjusted.

Audrey worked by recognizing phonemes, or individual sounds that were considered distinct from each other. The phonemes were correlated to reference models of phonemes that were generated by training the recognizer. Over the next two decades, researchers spent large amounts of time and money trying to improve upon this concept, with little success. Computer hardware improved by leaps and bounds, speech synthesis improved steadily, and Noam Chomsky's idea of generative grammar suggested that language could be analyzed programmatically. None of this, however, seemed to improve the state of the art in speech recognition. Chomsky and Halle's generative work in phonology also led mainstream linguistics to abandon the concept of the phoneme altogether, in favour of breaking down the sound patterns of language into smaller, more discrete features.

In 1969, John R. Pierce wrote a forthright letter to the Journal of the Acoustical Society of America, where much of the research on speech recognition was published. Pierce was one of the pioneers in satellite communications, and an executive vice president at Bell Labs, which was a leader in speech recognition research. Pierce said everyone involved was wasting time and money:

"It would be too simple to say that work in speech recognition is carried out simply because one can get money for it. ... The attraction is perhaps similar to the attraction of schemes for turning water into gasoline, extracting gold from the sea, curing cancer, or going to the moon. One doesn't attract thoughtlessly given dollars by means of schemes for cutting the cost of soap by 10%. To sell suckers, one uses deceit and offers glamor."

Pierce's 1969 letter marked the end of official research at Bell Labs for nearly a decade. The defense research agency ARPA, however, chose to persevere. In 1971 they sponsored a research initiative to develop a speech recognizer that could handle at least 1,000 words and understand connected speech, i.e., speech without clear pauses between each word. The recognizer could assume a low-background-noise environment, and it did not need to work in real time.

By 1976, three contractors had developed six systems. The most successful system, developed by Carnegie Mellon University, was called Harpy. Harpy was slow: a four-second sentence would have taken more than five minutes to process. It also still required speakers to build up a reference model by speaking sentences. Nonetheless, it did recognize a thousand-word vocabulary, and it did support connected speech.

Research continued on several paths, but Harpy was the model for future success. It used hidden Markov models and statistical modeling to extract meaning from speech. In essence, speech was broken up into overlapping small chunks of sound, probabilistic models inferred the most likely words or parts of words in each chunk, and then the same models were applied again to the aggregate of the overlapping chunks. The procedure is computationally intensive, but it has proven to be the most successful.

Throughout the 1970s and 1980s research continued. By the 1980s, most researchers were using hidden Markov models, which are behind all contemporary speech recognizers. In the latter part of the 1980s and in the 1990s, DARPA (the renamed ARPA) funded several initiatives. The first initiative was similar to the previous challenge: the requirement was a one-thousand word vocabulary, but this time a rigorous performance standard was demanded. This initiative produced systems that lowered the word error rate from ten percent. Additional initiatives have focused on improving algorithms and improving computational efficiency.

Microsoft released a speech recognition system that worked with Office XP. It encapsulated how far the technology had come in fifty years, and what the limitations still were. The system had to be trained to a specific user's voice, using the works of great authors that were provided, such as Edgar Allan Poe's "The Fall of the House of Usher" and Bill Gates's "The Way Forward." Even after training, the system was fragile enough that a warning was provided: "If you change the room in which you use Microsoft Speech Recognition and your accuracy drops, run the Microphone Wizard again." On the plus side, the system did work in real time, and it did recognize connected speech.

Speech Recognition Today

Technology

Current voice recognition technologies work on the ability to mathematically analyze, through resonance and spectrum analysis, the sound waves formed by our voices. Computer systems first record the sound waves spoken into a microphone through an analog-to-digital converter. The analog, or continuous, sound wave that we produce when we say a word is sliced up into small time fragments. These fragments are then measured based on their amplitude levels, the level of compression of the air released from a person's mouth. To measure the amplitudes and convert a sound wave to digital format, the industry has commonly relied on the Nyquist-Shannon theorem.

Nyquist-Shannon Theorem

The Nyquist-Shannon theorem was developed in 1928 to show that a given analog frequency is most accurately recreated by a digital frequency that is twice the original analog frequency. Nyquist proved this was true because an audible frequency must be sampled once for compression and once for rarefaction. For example, a 20 kHz audio signal can be accurately represented as a digital sample at 44.1 kHz.

How it Works

Commonly, speech recognition programs use statistical models to account for variations in dialect, accent, background noise, and pronunciation. These models have progressed to an extent that, in a quiet environment, accuracy of over 90% can be achieved. While every company has their own proprietary technologies for processing input, there are four common approaches to how speech is recognized.
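The sampling rule above can be illustrated numerically. The following sketch (our own illustration, not part of the original article; the function name is invented) computes the apparent frequency of a pure tone after sampling: tones below half the sampling rate survive intact, while anything higher folds down, i.e. aliases.

```python
def alias_frequency(tone_hz: int, sample_rate_hz: int) -> int:
    """Apparent frequency of a pure tone after sampling.

    Sampling at sample_rate_hz cannot distinguish tone_hz from
    tone_hz mod sample_rate_hz, and any component above the Nyquist
    limit (half the sampling rate) folds back below it.
    """
    f = tone_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# A 20 kHz tone is preserved at the CD rate of 44.1 kHz
# (Nyquist limit 22.05 kHz)...
print(alias_frequency(20_000, 44_100))  # 20000
# ...but a 5 kHz tone sampled at only 8 kHz appears as 3 kHz.
print(alias_frequency(5_000, 8_000))    # 3000
```

This is why the article's 20 kHz example needs a 44.1 kHz sampling rate: any audible component up to 20 kHz stays below the 22.05 kHz Nyquist limit and is represented without folding.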
