英文翻译及文献_单片机-传感器_压力检测(5篇)


第一篇:英文翻译及文献_单片机-传感器_压力检测

译文

利用无源声表面波传感器监测汽车轮胎压力

A. Pohl,G. Ostermayer,L. Reindl,F. Seifert
应用电子实验室,维也纳技术大学,Gusshausstrasse 27,A-1040 维也纳,奥地利

摘要:本文介绍声表面波(SAW)传感器在连续测量道路车辆轮胎气压中的应用。借助这种传感器,在行驶的任何阶段都可以读出轮胎气压。我们展示了测量轮胎压力的原型装置、所采用的SAW传感器、改进的版本以及询问(读取)装置,讨论了实际应用中存在的问题,并给出了试车过程中测得的轮胎压力作为实验结果。

导言

驾驶汽车时,行驶中轮胎突然漏气等轮胎故障可能导致危及生命的严重事故。此外,如今的汽车制造商正设法省去备用轮胎:它占用重量和空间,导致较高的油耗,而在汽车十余年的寿命里往往用不到一次。这就要求能够在行驶期间测量轮胎气压。目前使用的传感器含有有源元件,由锂电池供电;这些传感器组件的质量约为20克,会造成很高的动态负载。几年前,可无线远距离读取的声表面波(SAW)传感器问世了。它使用一条连接到天线的SAW延迟线:询问器发射射频询问信号,传感器接收后将响应信号重新发射回去。这类传感器能够测量

温度、机械载荷、力和位移等物理量。其好处是:声表面波传感器是完全无源的器件,既不含电源,也不含半导体;即使在温度高达几百度的恶劣环境下也能工作,寿命远长于电池供电的传感器;车辆点火系统产生的强电磁干扰也不会危及传感器的运作。下面我们首先讨论采用声表面波传感器的无线压力测量,然后介绍几种传感器组件和询问装置,最后对实验内容做简要总结。

声表面波压力传感器

电学上无源的声表面波压力传感器通常是带有多个反射器的单端口延迟线,或者是若干独立的谐振器。以延迟线为例:询问器发送一个射频突发脉冲信号,布置在基板表面的每个反射器各自产生一个响应信号,两个或多个响应信号之间存在延迟差。待测物理量通过改变声表面波传播路径的长度或声表面波的传播速度,转化为响应延迟的变化:响应的延迟正比于声表面波在基板表面传播的路径长度,反比于传播速度。把传感器基板做成可弯曲的膜,将传感器边缘固定,当膜受载时中心发生挠曲,基板被拉伸或压缩,从而改变延迟;另一种方式是把传感器直接安装在膜上,或者用压电膜本身作为SAW基板。图1示出了这些方法。
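作为示意,设第 i 个反射器的传播路径长度为 L_i、声表面波速度为 v(这些记号为此处假设,非原文符号),则往返传播的响应延迟及被测量引起的相对变化可近似写为:

```latex
% 延迟线第 i 个响应的延迟(声表面波往返传播):
\tau_i = \frac{2 L_i}{v}
% 被测量同时改变路径长度与波速时,延迟的相对变化:
\frac{\Delta\tau}{\tau} = \frac{\Delta L}{L} - \frac{\Delta v}{v}
```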

图1:a) 膜的弯曲变形传递到声表面波(SAW)传感器;b) 声表面波传感器直接安装在膜上

下一步是将传感器与覆盖膜组合成可批量制造的传感器系统,它比此前的版本小得多,便于集成。我们实现了带集成压力腔的传感器(图2),把它固定在轮辋上,并用金属阀杆作为传感器的天线(图4)。

图4:固定在轮辋上的集成压力腔,阀门用作天线

作为进一步的改进,我们实现了只安装在阀门上的传感器装置(图5)。该单元总质量只有几克,即使在高速行驶时动态负载也很小。

图5:带压力传感器的阀杆

询问系统采用空间分集来区分各个传感器:在每个翼子板下方安装一副天线。用同轴电缆连接这些天线工艺困难且成本高昂,因此我们的研究也考察了双绞线的适用性。

图6:汽车上的询问天线

为了进行测量,我们开发了一个小型询问系统,它发射突发脉冲,并检测发射突发与响应信号之间的相移。该系统由一个单片微控制器控制,测量结果显示在液晶显示屏上。系统的照片如图7所示。

图7:无源声表面波传感器的无线询问系统

为了检验我们的传感器和询问系统,我们在城市内外进行了多次试车:询问系统连接到笔记本计算机,压力值被测量并记录到文件中。图8给出了不同驾驶条件下的轮胎压力:制动操作时胎压升高(传感器安装在一个前轮上);较长时间的行驶使轮胎压力上升,随后压力逐渐回落。即使在暴风雪中驾驶,系统也表现出很高的可靠性。

图8:在不同的驾驶条件下的轮胎压力

图9放大显示了右前轮通过一个平交道口时的压力随时间的变化:该道口有两条铁轨,相邻处还有一条水渠穿越,路面破旧,硬冲击传递到车身,在轮胎中造成剧烈的压力冲击。

图9:通过平交道口两条铁轨时的轮胎压力

讨论

无线询问的声表面波传感器免维护,并能承受高热负荷和机械负荷,测量性能与竞争方案相当。在汽车系统中,有源传感器单元可以发送包含压力值和传感器标识的数字信息,因此对SAW传感器而言读取系统的开销更高。SAW器件的主要优势在于低质量:传感器随车轮一起旋转,受到离心力 F = mv²/r 的作用(m为质量,v为速度,r为半径)。为使动态机械负载最小,安装到旋转部件上的系统质量应尽可能低。传统轮胎压力测量传感器单元的质量约为20克,而集成的压力传感器(图5)质量不到1克;最坏情况下,完整的SAW传感器单元质量也只有几克。传统系统由锂电池供电,在轮胎报废的情况下,由于电池无法检查,传感器也应一起更换,带来废弃物处置问题。在与汽车电子集成的系统中,没有必要持续显示每个轮胎的压力,只需在发生故障时触发警报即可;系统的显示部分可以取消,从而降低系统成本。
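按 F = mv²/r 做一个数量级的粗略估算(其中车速 v = 30 m/s、车轮半径 r = 0.3 m 为假设的示例数值,非原文数据):

```latex
% 质量20g的传统传感器单元与1g的集成传感器所受离心力对比:
F_{20\,\mathrm{g}} = 0.020 \times \frac{30^2}{0.3} = 60\,\mathrm{N},\qquad
F_{1\,\mathrm{g}} = 0.001 \times \frac{30^2}{0.3} = 3\,\mathrm{N}
```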

结论

无源声表面波传感器的优点使它们非常适合车辆应用。特别是在测量轮胎气压方面,低质量和免维护这两点使它们优于竞争方案。文中讨论了采用声表面波传感器的压力测量、传感器在轮胎中的集成以及询问系统,并给出了大量试车测量的实验结果。

参考文献

[1] L. Reindl,F. 穆勒,F. 塞弗特:无源声表面波传感器的询问,国际专利申请(1992)。

[2] F. 塞弗特等:基于声表面波的机械传感器,Sensors and Actuators A(1994),pp. 231-239。
[3] 绍尔,T. Ostertag,L. Reindl,H. 谢尔,O. Sczesny,U. 沃尔夫:用于物理参数远程测控的无线声表面波传感器,商用无线电传感与通信技术,1997,pp. 51-58。
[4] H. 谢尔,G. 舍尔,F. 塞弗特,R. 威格尔:基于声表面波反射延迟线的石英压力传感器,Proc. IEEE Ultrasonics Symposium 1996,pp. 347-350。

第二篇:单片机英文翻译

微机发展简史

第一台存储程序的计算机出现于1950年前后。我们在剑桥建造的EDSAC(延迟存储自动电子计算机)于1949年夏天首次投入使用。

最初的实验用计算机是由像我这样背景各异的人建造的。我们在电子工程方面都有丰富的经验,并且深信这些经验会对我们大有裨益。后来证明的确如此,尽管我们也有一些新东西要学。其中最重要的是必须正确处理瞬态信号:在电视机荧幕上只会引起一次无害闪光的东西,在计算机中却可能导致严重的错误。

在计算电路的设计上,我们可选的方案之多令人眼花缭乱。举例来说,我们可以像在EDSAC中那样用真空二极管做门电路,也可以用两个栅极都加控制信号的五极管,后者被广泛用于其他系统。这类选择一直持续着,于是出现了"逻辑门族"的说法。在计算机领域工作过的人都会记得TTL、ECL和CMOS;其中,CMOS如今已占据主导地位。

在最初的那些年,IEE(英国电气工程师学会)仍由动力工程占据主导地位。为了让无线电工程以及迅速发展的电子学(在IEE被称为"弱电工程")被正式承认为一门独立的学科,我们不得不打几场硬仗。我还记得,由于动力工程师们做事的方式与我们不同,我们在组织会议时遇到了一些困难。一个小小的恼人之处是:IEE发表的所有论文都被要求以一段冗长的"早期实践"陈述开头,而当根本不存在早期实践时,这是很难做到的。

60年代的巩固阶段

到50年代末、60年代初,个人英雄主义的开拓阶段结束了,计算机领域真正开始发展起来。世界上的计算机数量已大大增加,而且比早期的机器可靠得多。高级语言的起步和第一批操作系统的诞生都可以归于那些年代。分时系统的实验开始了,计算机图形学最终也随之而来。

最重要的是,晶体管开始取代真空管。这一变化对当时的工程师们是一个严峻的挑战:他们必须忘掉自己熟悉的电路,从头再来。只能说,他们出色地经受住了挑战,而且这一转变进行得再顺利不过了。

小规模集成电路和小型机

很快,人们发现可以在同一块硅片上放置不止一个晶体管,集成电路由此诞生。随着时间的推移,集成度达到了足以在一个芯片上容纳少量逻辑门或触发器所需晶体管的水平,由此出现了我们所熟知的7400系列芯片。每个门电路或触发器相互独立,各有自己的引脚,可以通过片外导线连接起来,构成一台计算机或其他任何东西。

这些芯片使制造一种新型计算机成为可能,它被称为小型机。它比大型机稍逊,但功能仍然强大,而且便宜得多。一个商业部门或大学可以为每个主要部门配备一台小型机,而不必为整个机构购置一台昂贵的大型机。

不久,小型机开始普及,功能也日益强大。世界急需计算能力,而工业界一直无法以所需的规模和合理的价格供应,这令人非常沮丧。小型机的出现改变了这一局面。

计算成本的下降并非始于小型机,它从来就是这样。这就是我在摘要中所说的计算机工业中的"通货膨胀""反着走"的含义:随着时间的推移,人们用同样的钱得到的东西越来越多,而不是越来越少。

精简指令计算机的诞生

早期的计算机有简单的指令集。随着时间的推移,商用计算机的设计者不断增加他们认为能提高性能的额外特性。很少有人做对比测量,总的来说,特性的取舍很大程度上依赖于设计者的直觉。

1980年,即将改变这一切的RISC运动在世界上兴起。该运动始于Patterson和Ditzel发表的一篇题为《精简指令集计算机的理由》(The Case for the Reduced Instruction Set Computer)的论文。

除了带来一个引人注目的缩略词之外,这个标题几乎没有传达出伴随RISC运动而来的关于指令集设计的种种见解,特别是它促进流水线技术的方式。所谓流水线,是指在处理器内部同一时刻可以有几条指令处于不同的执行阶段。流水线并不是新事物,但对小型计算机来说是全新的。

RISC运动极大地受益于当时刚刚出现的一类方法:不必真正实现一个计算机设计,就能估计它的预期性能。我指的是利用现有的强大计算机来模拟新设计。通过模拟,RISC的提倡者能够有相当把握地预言:一个优秀的RISC设计,在采用相同电路工艺的条件下,能够胜过最好的传统计算机。这一预言最终在实践中得到了证实。

模拟仿真进展迅速,很快被计算机设计者普遍采用。因此,计算机设计变得更像一门科学,而不再那么像一门艺术。今天,设计者们希望有满满一屋子的计算机来做仿真,而不只是一台;他们给这样的一屋子计算机起了个好听的名字:"计算机农场"(computer farm)。

x86指令集

如今,除了一个重要的例外,RISC之前的指令集已经很少被人提起了。这个例外就是Intel 8086及其后裔,统称为x86。x86已成为占主导地位的指令集,而当初曾相当成功的那些RISC指令集,现在不得不为生存而苦苦挣扎。

对于我们这些来自计算机领域研究部门(无论学术界还是工业界)的人来说,x86的统治地位令人失望。毫无疑问,x86的生存在很大程度上出于商业上的考虑,但也还有其他原因。无论我们这些搞研究的人多么希望事实并非如此,高级语言至今仍没有完全消除机器码的使用。我们需要不断提醒自己:在能够做到的情况下,与先前的应用保持严格的二进制兼容是大有好处的。然而,如果Intel打造优秀RISC芯片的主要尝试更成功一些,情况也许会有所不同。我指的是i860(不是i960,那是另一回事)。从许多方面来说,i860都是一块卓越的芯片,但它的软件接口不适合用在工作站上。

x86指令集这场看似轻松的胜利背后有一个有趣的转折:事实证明,像过去那样直接实现x86指令集,已不可能跟上RISC处理器持续增长的速度。于是,设计者们借鉴了RISC的做法:虽然表面上看不出来,现代的x86处理器芯片内部其实隐藏着一个带有自己内部RISC编码的RISC风格处理器。输入的x86代码经过适当的加工后,被转换成这种内部编码,交给RISC核心去执行关键的部分。

在对RISC运动的这番总结中,我主要依据的是最新版的Hennessy和Patterson关于计算机设计的著作,特别是《计算机体系结构》第三版(2003年),第146、151-154、157-158页。

IA-64指令集

不久以前,Intel和Hewlett-Packard推出了IA-64指令集。它的主要目的是满足公认的对64位地址空间的需求;在这一点上,它效仿了MIPS R4000和Alpha设计者的做法。然而,人们本以为Intel会强调与x86的兼容性,令人费解的是,他们做的恰恰相反。

此外,IA-64的设计中内置了一种称为谓词执行(predication)的特性,使它与所有其他指令集在一个重要方面不兼容:每条指令需要额外的6位。这打破了指令字长和信息量之间的传统平衡,并显著改变了编译器编写者的任务。

尽管IA-64是一个全新的指令集,Intel却发表了一个令人困惑的声明:基于IA-64的芯片将与早期的x86芯片兼容。很难弄清这到底指什么。

最新的IA-64处理器,即Itanium,其芯片显然带有用于兼容的专门硬件;即便如此,x86代码仍运行得相当慢。

由于上述复杂因素,实现IA-64所需的芯片比实现传统指令集的芯片更大,这意味着更高的成本。无论如何,这是公认的看法;Gordon Moore最近访问剑桥、为新开放的Betty and Gordon Moore图书馆揭幕时,也把它作为一般原则重申了一遍。不过,我听说从Intel内部看,情况似乎有所不同。这一点我不理解,但我很乐意承认,在半导体产业的经济学方面,我完全是个门外汉。

AMD定义了一种与x86兼容性更好的64位指令集,而且他们似乎进展顺利。这种芯片并不特别大。很多人认为这才是Intel当初应该做的。(在这篇演讲发表之后,Intel宣布他们将销售一系列本质上与AMD芯片兼容的产品。)

更小晶体管的出现

集成度还在不断提高,这是通过缩小原有的晶体管、使更多晶体管能放在一个芯片上实现的。而且,物理学定律站在制造商一边:晶体管仅仅因为变小就变得更快了。因此,高集成度和高速度可以兼得。

这还有一个进一步的优势。芯片是在称为晶圆(wafer)的硅圆片上制造的:每块晶圆上有大量独立的芯片,它们被同时加工,然后再分离。由于缩小使每块晶圆能容纳更多的芯片,每块芯片的成本随之下降。

单位成本的下降对计算机工业很重要,因为如果最新的芯片既更快、制造成本又更低,就没有理由继续提供老产品,至少不会无限期地提供。这样,整个市场只需要一种产品。

然而,详细的成本核算表明,当缩小进行到一定程度之后,为了维持这一优势,就必须转向更大的晶圆。晶圆尺寸的增大绝非小事:最初晶圆直径只有一两英寸,到2000年已达12英寸。起初我不太明白:缩小本身已经带来了那么多问题,工业界为什么还要转向更大的晶圆、给自己增加难度?现在我明白了,降低单位成本对工业界来说和增加芯片上的晶体管数量同等重要,这使得对晶圆厂的追加投资和更大的风险都是合理的。

集成度用特征尺寸来衡量:对于给定的工艺,特征尺寸最好定义为该工艺所制造的最高密度芯片上导线间距的一半。目前,90纳米芯片的生产仍在扩充之中。

Murphy定律的中止

1997年3月,在卡文迪什实验室举行的发现电子一百周年纪念庆典上,Gordon Moore应邀担任演讲者。正是在他的演讲中,我第一次听到这样的说法:硅芯片可以做到既快又便宜,这违反了Murphy定律(在英国通常称为Sod定律)。Moore说,其他领域的经验会让你认为必须在速度和成本之间做出选择或折衷;但事实上,对硅芯片而言,两者可以兼得。

在网上可以查到的一本参考书中,Murphy被认定为1949年在美国空军从事人体加速度试验的一名工程师。然而,早在我的学生时代,我们就对这条定律十分熟悉,当时我们给它起了一个比上面两个名字平淡得多的名字:"普遍不如意定律"(Law of General Cussedness)。它甚至出现在我们的模拟试题中。题目是那种第一部分要求给出某条定律的定义、第二部分要求用它解决一个问题的类型。我们的题目是:

一、给出"普遍不如意定律"(Law of General Cussedness)的定义;

二、一个骑自行车的人出发进行环形骑行,推导出一个给出任意时刻风向的方程。

单片机

每缩小一次,一台机器中的芯片数量就减少,芯片之间的连线也随之减少。这带来了整体速度的进一步提升,因为信号在芯片之间的传输需要很长的时间。

渐渐地,缩小进行到了这样的程度:除缓存之外的整个处理器都可以放在一个芯片上。这使得人们能够造出性能超过当时最快小型机的工作站,其结果是把小型机彻底送进了坟墓。正如我们所知,这对计算机工业和从业人员产生了深远的影响。

自从上述时代的开始,高密度CMOS硅芯片成为主导。随着芯片的缩小技术的发展,数百万的晶体管可以放在一个单独的片子上,相应的速度也成比例的增加。

为了获得额外的速度,处理器设计者开始试验新的体系结构特性。一个非常成功的试验涉及预测程序分支走向的方法。它的成功程度令我惊讶:它显著加快了程序的执行速度,随后其他形式的预测技术也相继出现。

同样令人惊讶的是,人们发现可以把各种高级特性放进单芯片计算机中。例如,当年为IBM Model 91(System 360系列顶端的巨型计算机)开发的特性,如今在微型计算机上也能找到。

Murphy定律仍然处于中止状态。用7400系列那样的小规模集成芯片搭建实验计算机已经没有意义了。想在电路级做硬件研究的人别无选择,只能自己设计芯片并设法把它制造出来。在一段时间里,这是可能的,虽然并不容易。

不幸的是,此后制造芯片的费用急剧增长,主要原因是光刻掩模制作成本的增加(光刻是芯片制造中使用的一种照相工艺)。因此,为研究用芯片筹措资金又变得十分困难,这是当前令人担忧的一个问题。

半导体路线图

支撑上述进展的大量研究与开发工作,得益于国际半导体产业各方卓越的合作努力。

在过去,美国的反垄断法很可能会使美国公司参与这种合作成为非法。但大约在1980年,法律发生了意义深远的变化,"预竞争研究"的概念被引入。现在,各公司可以在预竞争阶段开展合作,然后再以常规的竞争方式各自开发产品。

在半导体工业中,管理预竞争研究的机构是半导体工业协会(SIA)。它自1992年起作为美国国内组织开展活动,1998年成为国际性组织。任何能为研究工作做出贡献的组织都可以加入。

SIA每两年发布一版称为《国际半导体技术路线图》(ITRS)的文件,并在中间年份进行更新。第一卷冠以"路线图"之名的文件发布于1994年,但写于1992年、1993年分发的两份报告被认为是这一系列的真正开端。

历次路线图旨在就产业应如何前进提供可获得的最佳产业共识。它们在15年的时间跨度上详细列出各项必须达到的目标:如果要让芯片上的元件数量每18个月翻一番(即维持Moore定律),同时让每块芯片的成本不断下降,就必须实现这些目标。
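按这一倍增节奏可做一个简单的说明性推算(此处算式为补充示意,非路线图原文):

```latex
% 每18个月(1.5年)翻一番:
N(t) = N_0 \cdot 2^{t/1.5}\quad(t\ \text{以年计})
% 在路线图的15年跨度内:
N(15) = N_0 \cdot 2^{10} \approx 1000\,N_0
```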

在某些方面,前进的道路是清楚的。在另一些方面,制造上的问题可以预见,解决办法也已知晓,尽管尚未完全落实;这样的领域在表格中用黄色表示。而那些问题可以预见、但尚无可制造的解决方案的领域,则标为红色。红色区域常被称为"红砖墙"。

路线图设定的目标既现实又充满挑战,整个半导体产业的进步一直紧跟路线图。这是一项了不起的成就,可以说,合作与竞争的优点在这里以令人钦佩的方式结合在了一起。

值得注意的是,影响产业进步的重大战略决策,是在相对公开的预竞争层面上做出的,而不是关起门来进行的。其中就包括向更大晶圆的过渡。

到1995年,我开始琢磨:当不可避免地到达无法把晶体管做得更小的那一点时,究竟会发生什么。带着这个疑问,我访问了位于华盛顿特区的ARPA(美国国防部高级研究计划局)总部,在那里我得到了一份新出的1994年路线图。它清楚地表明:当特征尺寸达到100纳米时将出现严重的问题,这预计发生在2007年;70纳米则预计在2010年。在后来的路线图中,100纳米(确切地说是90纳米)到来的年份被提前到了2004年,而实际上产业界还要稍早一些就达到了。

1996年2月8日,我在伦敦向IEE做了题为《CMOS终结点及计算领域相关话题》的演讲,其中引用了1994年路线图中的上述信息,以及我能获得的其他信息。

我当时的想法是:终点将是表示"1"的可用电子数目从数千个减少到几百个的直接后果。到那时,统计涨落将成为麻烦,此后电路或者不再工作,或者即使工作、速度也不会更快。事实上,目前开始显现的物理限制并非源于电子的短缺,而是因为芯片上的绝缘层已变得如此之薄,以致量子力学隧穿效应引起的泄漏成了麻烦。

除了来自基础物理的问题之外,芯片制造商还面临许多其他问题,尤其是光刻方面的困难。在2002年出版的2001年路线图更新版中有这样的表述:按目前的速度,随着2005年的临近,如果多数技术领域没有取得研究突破,进步将陷于停顿。这是迄今SIA对"红砖墙"做出的最明确的表述,而且措辞强硬。2003年的路线图进一步强化了这一点:许多区域被标为红色,表示这些领域还不存在已知的可制造解决方案。

令人欣慰的是,到目前为止,所遇到的问题都及时找到了解决办法。路线图是一份非凡的文档:尽管它对眼前逼近的问题直言不讳,却散发着巨大的信心。主流观点反映了这种信心,人们普遍期望,通过某种方式,缩小将继续下去,也许到45纳米甚至更小。

然而,成本将以越来越快的速度急剧上升,最终,成本将被视为叫停的原因。产业界就"不断攀升的成本已无法承受"达成共识的确切时间点,将取决于整体经济形势以及半导体产业自身的财务状况。

最先进芯片的绝缘层厚度已接近5个原子。除了寻找更好的绝缘材料(而这也走不了多远)之外,我们对此无能为力。随着导线截面越来越小,我们还将面临芯片上布线的问题,包括散热和原子迁移。这些问题是非常根本性的:如果我们做不出导线和绝缘层,无论CMOS工艺或半导体材料有多大改进,我们都造不出计算机。指望某种新工艺或新材料能让晶体管密度每18个月翻一番的美好时光重新开始,是没有用的。

我在上文说过,人们普遍期望缩小会以这样或那样的方式继续到45纳米甚至更小。我的意思是:到某个时候,我们所熟知的CMOS的进一步等比例缩小将变得不可行,产业界需要到它之外去寻找出路。

自2001年以来,路线图中有一节题为"新兴研究器件",讨论非传统形式的CMOS等。对这些可能性积极而善于抓住机会的开发,无疑会让我们在这条路上再向前走一段有用的距离;但路线图正确地把这种进步与我们习以为常的传统CMOS等比例缩小区分了开来。

内存技术的进步

非传统CMOS可能给存储器技术带来革命。迄今为止,我们一直依靠DRAM作为主存。不幸的是,随着缩小的继续,DRAM的速度只有微小的提升,而处理器芯片及其缓存的速度每两年就翻一番。其结果是处理器与主存之间的速度差距不断扩大,这就是"存储器代沟",也是当前人们焦虑的根源。存储技术如果取得突破(可能采用某种形式的非传统CMOS),将在具有大存储需求的问题(即装不进缓存的问题)上带来整体性能的重大进步。

也许,非传统CMOS的最终角色将是这一突破,而不是把基本处理器速度再略微提高一点。

原文:

Progress in Computers The first stored program computers began to work around 1950.The one we built in Cambridge, the EDSAC was first used in the summer of 1949.These early experimental computers were built by people like myself with varying backgrounds.We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead.This proved true,although we had some new things to learn.The most important of these was that transients must be treated correctly;what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.As far as computing circuits were concerned, we found ourselves with an embarass de richess.For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere.This sort of choice persisted and the term families of logic came into use.Those who have worked in the computer field will remember TTL, ECL and CMOS.Of these, CMOS has now become dominant.In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering along with the rapidly developing subject of electronics.dubbed in the IEE light current electrical engineering.properly recognised as an activity in its own right.I remember that we had some difficulty in organising a conference because the power engineers‟ ways of doing things were not our ways.A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest.The number of computers in the world had increased and they were much more reliable than the very early ones.To those years we can ascribe the first steps in high level languages and the first operating systems.Experimental time-sharing was beginning, and ultimately computer graphics was to come along.Above all, transistors began to replace vacuum tubes.This change presented a formidable challenge to the engineers of the day.They had to forget what they knew about circuits and start again.It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits.As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops.This led to a range of chips known as the 7400 series.The gates and flip flops were independent of one another

and each had its own pins.They could be connected by off-chip wiring to make a computer or anything else.These chips made a new kind of computer possible.It was called a minicomputer.It was something less that a mainframe, but still very powerful, and much more affordable.Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.Before long minicomputers began to spread and become more powerful.The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost.Minicomputers transformed the situation.The fall in the cost of computing did not start with the minicomputer;it had always been that way.This was what I meant when I referred in my abstract to inflation in the computer industry „going the other way‟.As time goes on people get more for their money, not less.The RISC Movement and Its Aftermath

Early computers had simple instruction sets.As time went on designers of commercially available machines added additional features which they thought would improve performance.Few comparative measurements were done and on the whole the choice of features depended upon the designer‟s intuition.In 1980, the RISC movement that was to change all this broke on the world.The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instructions Set Computer.Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time.Pipelining was not new, but it was new for small computers

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it.I refer to the use of a powerful existing computer to simulate the new design.By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology.This

prediction was ultimately born out in practice.Simulation made rapid progress and soon came into universal use by computer designers.In consequence, computer design has become more of a science and less of an art.Today, designers expect to have a roomful of, computers available to do their simulations, not just one.They refer to such a roomful by the attractive name of computer farm.The x86 Instruction Set

Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86.This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.This dominance of x86 disappoints people like myself who come from the research wings.both academic and industrial.of the computer field.No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well.However much we research oriented people would like to think otherwise.high level languages have not yet eliminated the use of machine code altogether.We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained.Nevertheless, things might have been different if Intel‟s major attempt to produce a good RISC chip had been more successful.I am referring to the i860(not the i960, which was something different).In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set.It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past.Instead, designers took a leaf out of the RISC book;although it is not obvious, on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding.The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson‟s books on computer design as my supporting authority;see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.The IA-64 instruction set.Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set.This was primarily intended to meet a generally recognised need for a 64 bit address space.In this, it followed the lead of the designers of the MIPS R4000 and Alpha.However one would have thought that Intel would have stressed compatibility with the x86;the puzzle is that they did the exact opposite.Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets.In particular, it needs 6 extra bits with each instruction.This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips.It was hard to see exactly what was meant.Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility.Even so, x86 code runs very slowly.Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets.This in turn implies a higher cost.Such at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library.I have, however, heard 
it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry. AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase.This was achieved by shrinking the original transistors so that more could be put on a chip.Moreover, the laws of physics were on the side of the manufacturers.The transistors also got faster, simply by getting smaller.It was therefore possible to have, at the same time, both high density and high speed.There was a further advantage.Chips are made on discs of silicon, known as wafers.Each wafer has on it a large number of individual chips, which are processed together and later separated.Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely.There can thus be one product for the entire market.However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers.The increase in the size of wafers was no small matter.Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches.At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers.I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.The degree of integration is measured by the feature size, which, for a given technology, is best defined as the half the distance between wires in the densest chips made in that technology.At the present time, production of 90 nm chips is still building up Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory.It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy‟s law.or Sod‟s law as it is usually called in the UK.Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them.In fact, in the case of silicon chips, it is possible to have both.In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness.We even had a mock examination question in which the law

featured.It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it.In our case the first part was to define the Law of General Cussedness and the second was the problem;A cyclist sets out on a circular cycling tour.Derive an equation giving the direction of the wind at any time.The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another.This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip.This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead.As we all know, this had severe consequences for the computer industry and for the people working in it.From the above time the high density CMOS silicon chip was Cock of the Roost.Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.Processor designers began to experiment with new architectural features designed to give extra speed.One very successful experiment concerned methods for predicting the way program branches would go.It was a surprise to me how successful this was.It led to a significant speeding up of program execution and other forms of prediction followed Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features.For example, features that had been developed for the IBM Model 91.the giant computer at the top of the System 360 range.are now to be found on microcomputers

Murphy‟s Law remained in a state of suspension.No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series.People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made.For a time, this was possible, if not easy

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips.It has, in consequence, again

become very difficult to finance the making of research chips, and this is a currently cause for some concern.The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort.However about 1980 significant and far reaching changes took place in the laws.The concept of pre-competitive research was introduced.Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association(SIA).This has been active as a US organisation since 1992 and it became international in 1998.Membership is open to any organisation that can contribute to the research effort.Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors(ITRS), with an update in the intermediate years.The first volume bearing the title „Roadmap‟ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward.They set out in great detail.over a 15 year horizon.the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months.that is, if Moore‟s law is to be maintained.-and if the cost per chip is to fall.In the case of some items, the way ahead is clear.In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out;these areas are coloured yellow in the tables.Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red.Red areas are referred to as Red Brick Walls.The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely.This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors.These include the progression to larger wafers.By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller.My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994.This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010.The year for which the coming of 100 nm(or rather 90 nm)was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.I presented the above information from the 1994 Roadmap, along with such other information that I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing and delivered on 8 February 1996.The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred.At 
this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster.In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography.In an update to the 2001 Roadmap published in 2002, it was stated that the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas “.This was the most specific statement about the Red Brick Wall, that had so far come from the SIA and it was a strong one.The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered.The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence.Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.However, costs will rise steeply and at an increasing rate.It is cost that will ultimately be seen as the reason for calling a halt.The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.。

Insulating layers in the most advanced chips are now approaching a thickness equal to that of 5 atoms.Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this.We may also expect to face problems with on-chip wiring as wire cross sections get smaller.These will concern heat dissipation and atom migration.The above problems are very fundamental.If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or improvements in semiconductor materials.It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.I said above that there is a general expectation that shrinkage would continue by one means or another to 45 nm or even less.What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it.Since 2001 the Roadmap has had a section entitled emerging research devices on non-conventional forms of CMOS and the like.Vigorous and opportunist exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.Advances in Memory Technology

Unconventional CMOS could revolutionize memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are only increasing in speed marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the memory gap and is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache. Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.

第三篇:单片机英文翻译

单片机翻译

本文所研究的作息时间控制系统以MCS-51系列单片机AT89S51作为主控部件,外围电路采用12MHz晶体振荡器、74LS164移位寄存器、复位电路、三个按键和四联LED数码管作为显示时间的器件,不需外扩展存储器即可实现其功能。整个设计主要运用单片机的自动控制原理,包括硬件和软件两部分:硬件部分包括继电器、存储器和显示器接口芯片;软件部分主要是主程序设计。将软、硬件有机地结合在一起,使系统能够正确地计时。在系统调试中,首先对硬件进行调试,然后逐级叠加调试;软件先在最小系统板上调试,确保其工作正常之后,再与硬件系统联调;最后将各模块组合后整体测试,使系统的所有功能得以实现。文中介绍了单片机AT89S51的主要特性及各管脚的说明,并讲解了74LS164的内部功能,在此基础上展开设计。

The time-schedule control system studied in this paper uses the MCS-51 series microcontroller AT89S51 as the main control unit. The peripheral circuit consists of a 12 MHz crystal oscillator, 74LS164 shift registers, a reset circuit, three push buttons and a four-digit LED display used to show the time; the system achieves its functions without external memory expansion. Throughout the design, the automatic control principles of the single-chip microcomputer are applied, covering both hardware and software. The hardware part includes the relay, memory and display interface chips; the software part is mainly the main program design. Hardware and software are combined organically so that the system keeps time correctly. During system debugging, the hardware was debugged first and then built up stage by stage; the software was first debugged on a minimum system board and, once it worked properly, jointly debugged with the hardware. Finally, the modules were combined and the whole system tested, so that all functions were realized. The paper describes the main features of the AT89S51 microcontroller and its pins, and explains the internal functions of the 74LS164, on which basis the design is developed.
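下面是驱动74LS164的一个最小化C51示意草图(假设串行数据与时钟引脚接在P1口,具体引脚分配以实际电路为准),演示文中所述串入并出移位寄存器的基本用法——逐位送数、时钟上升沿移位:

```c
#include <reg51.h>   // Keil C51的8051寄存器定义

sbit SER = P1^0;     // 74LS164串行数据输入(引脚分配为假设)
sbit CLK = P1^1;     // 74LS164移位时钟

// 将一个字节(如数码管段码)高位在前移入74LS164
void shift164(unsigned char dat)
{
    unsigned char i;
    for (i = 0; i < 8; i++) {
        SER = (dat & 0x80) ? 1 : 0;  // 先送最高位
        CLK = 0;
        CLK = 1;                     // 上升沿移入一位
        dat <<= 1;
    }
}
```

级联多片74LS164时,连续多次调用shift164()即可把四联数码管的段码依次移入。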

第四篇:开题报告:基于单片机的压力检测系统设计

基于单片机的压力检测系统的设计

题目要求:(包括主要技术参数)

本课题是基于单片机的压力测量与显示系统。要求通过压力传感器将压力转换成电信号,再经运算放大器进行信号放大,送至8位A/D转换器,将模拟信号转换成单片机可以识别的数字信号,再经单片机转换成LED显示器可以识别的信息,最后显示输出。在显示过程中,可通过键盘向系统输入各种数据和命令,让单片机系统处于预定的功能状态,实时显示需要的值,且要求系统具有较强的抗干扰能力。主要技术参数为:
量程:0~500kg
综合精度:±0.25%
响应时间:≤10ms

本课题研究的目的与意义:

在煤炭工业、制药、冶金、制造、钢铁、供水、化工等行业中,压力是生产过程中的重要参数之一。随着现代化工业的发展,工厂大多引入自动化生产线,以提高生产效率、降低成本,增强市场竞争力;而压力的检测或控制是保证生产和设备安全运行必不可少的条件。因此,压力检测技术的改进与发展历来受到众多行业的高度重视。

传统的传感器大都采用手工操作,特别是压力传感器,基本都是手动油压或气压标定。鉴于此,选择压力传感器作为前端检测元件、以单片机作为检测仪核心,研制新型、成本低廉、使用方便的压力检测系统,以克服原有检测仪的不足,是十分有意义的。

国内外研究现状:

二十世纪80年代中后期,随着集成电路、微型计算机及软件技术的发展,在智能仪器的基础上又出现了虚拟仪器。它们都含有计算机,但在性能特点上又有新的飞跃,使压力信号的采集与控制、信号分析与处理和结果的表达输出全部由计算机完成。如今通信已从模拟技术转向数字技术,特别是网络技术的发展,使异地实时测量成为现实。当前世界发达国家都高度重视和支持仪器仪表的发展:美国认为对国家长期安全和经济繁荣至关重要的22项技术中,有6项与传感器信息处理技术直接相关;日本科学技术厅把测量传感器技术列为21世纪首位发展的技术;德国大面积推广应用自动化测控仪器系统,20世纪90年代的6年间市场增长了350%,保证了劳动生产率增长1.9%;欧共体制定的第三个科技发展总体规划,将测量和检测技术列为15个专项之一。目前,美国Paroscientific公司、DH公司、Mensor公司、英国Druck公司都推出了准确度不低于0.01%FS、年稳定性为0.01%FS、具有双向通讯和模拟/数字输出功能的高精度数字化仪表。

国内自动矿山压力监测系统起步较晚,但发展迅速:60年代停留在多点巡回检测阶段,70年代开始研制以小型计算机为中心的数据采集、处理系统,自80年代起,随着微型计算机在我国的广泛应用,自动检测技术逐步提高。目前,我国虽也在研制智能仪表(仪器),但很多是引进国外现有设备,再在软件上进行二次开发应用。在压力监测技术的发展方面,压力传感器技术、信号调理技术、高速数据采集和数据处理技术获得了飞速发展,动态测试技术、总线技术和网络技术在压力测试和控制中得到了越来越广泛的应用。新型的智能传感器将敏感元件、信号调理、A/D转换和微处理器集成在一起,并可通过现场总线或网络把数据传送给主计算机。

虽然我国的压力监测系统起步晚,总体来说还落后于发达国家,但矩形双岛膜结构的6000Pa量程微压传感器性能指标已有很大提高:非线性为5×10⁻⁴FS,滞后、重复性均小于5×10⁻⁴FS,分辨率优于20Pa,过压保护范围大于20倍量程。对量程为100kPa的压力传感器,非线性、滞后、重复性均优于5×10⁻⁴FS。硅-蓝宝石、高温硅压力传感器的工作温度分别达到-50~300℃和0~400℃。压敏器件的可靠性已达到较好水平,元器件品种增多,测压范围不断拓展,已有微压、表压、高压、绝对压力、差压等力敏元件及其配套仪表问世。因此,随着我国传感器等各方面技术的快速发展,我国的压力监测系统也将取得长足的进步。

拟采取的研究路线:

第一步:根据本设计的研究目的,调查文献获得资料,从而全面、正确地了解所要研究的问题;系统地掌握控制器的开发设计过程以及相关的电子技术和传感器技术等,完成设计任务和功能的描述。

第二步:研究文献后结合本课题的研究目的,提出设计方案,借助计算机和各种方法技术,减少或消除各种可能影响结果的无关因素的干扰,完成系统设计方案的论证和总体设计。

第三步:以自己的感官和辅助工具去直接观察和分析系统需求,从全局考虑,完成硬件和软件资源分配和规划,分别完成系统的硬件设计和软件设计。

第四步:利用C语言编写软件程序,用仿真软件绘制电路图,完成硬件测试,软件调试和软硬件的联调。

第五步:实物制作。

进度安排:

第1~2周:通过网络和图书馆,进行资料查询与下载,整理出所需文献。
第3~4周:查阅文献,理清研究思路,书写开题报告。
第5~8周:设计系统的硬件和软件。
第9~12周:深入研究课题内容,完成系统硬件设计及软件调试。
第13周:进一步完善实物。
第14~15周:做研究总结,撰写论文。
第16周:准备论文答辩。

文献综述:

基于单片机的压力检测系统的设计

1. 前言

压力是工业生产过程中的重要参数之一。压力的检测或控制是保证生产和设备安全运行必不可少的条件,实现智能化压力检测系统对工业过程的控制具有非常重要的意义。本设计主要通过单片机及专用芯片对传感器所测得的模拟信号进行处理,使其实现智能化功能。

2. 主题

本课题是基于单片机的压力的测量与显示系统。要求通过压力传感器将压力转换成电信号,再经过运算放大器放大后送至A/D转换器,转换后发往单片机,单片机进行数据处理并发往 LED显示。而在显示过程中可以通过键盘,向计算机系统输入各种数据和命令,让单片机系统处于预定的功能状态,实时显示需要的值。

可见系统整体结构应包括:传感器模块、放大器模块、A/D采集模块、单片机模块、按键模块和显示模块。

2.1 传感器模块

选择:压力传感器是压力检测系统中的重要组成部分,由各种压力敏感元件将被测压力信号转换成容易测量的电信号作输出。力学传感器的种类繁多,如电阻应变片压力传感器、半导体应变片压力传感器、压阻式压力传感器、电感式压力传感器、电容式压力传感器、谐振式压力传感器等。【1】

而电阻应变式传感器具有结构简单、体积小、使用方便、性能稳定、可靠、灵敏度高、适合静态及动态测量、测量精度高等诸多优点,因此是目前应用最广泛的传感器之一。

电阻应变式传感器由弹性元件和电阻应变片构成,当弹性元件感受到物理量时,其表面产生应变,粘贴在弹性元件表面的电阻应变片的电阻值将随着弹性元件的应变而相应变化。通过测量电阻应变片的电阻值变化,可以测量位移、加速度、力、力矩、压力等各种参数。

测量电路:应变片是通过敏感栅电阻的相对变化来测量应变的。电阻的相对变化很小,用一般测量电阻的仪表很难直接测出,所以必须用专门的电路来测量这种微弱的电阻变化,最常用的是直流和交流电桥电路。【2】
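作为示意,单臂(四分之一)电桥的输出电压与应变的关系可写为(其中U_E为桥路供电电压、K为应变片灵敏系数,记号为此处假设):

```latex
% 单臂电桥:一个桥臂为应变片,其余为固定电阻R
U_o \approx \frac{U_E}{4}\cdot\frac{\Delta R}{R} = \frac{U_E}{4}\,K\,\varepsilon
```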

2.2 放大器的选择

被测的非电量经传感器得到的电信号幅度很小,无法直接进行A/D转换,必须对这些模拟信号进行放大处理【3】。为使电路简单、便于调试,本设计采用三运放仪表放大器:在存在较大共模电压的条件下,仪表放大器能够放大很微弱的差分电压信号,并且具有很高的输入阻抗。这些特性使其广泛用于测量压力和温度的应变电桥接口、热电偶温度检测以及各种低边、高边电流检测。
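作为示意,经典三运放仪表放大器的差分增益为(电阻命名为此处假设的记号):

```latex
% 前级两运放提供增益 1 + 2R_1/R_g,后级差分放大提供 R_3/R_2:
G = \left(1 + \frac{2R_1}{R_g}\right)\cdot\frac{R_3}{R_2}
% 例:R_1 = R_2 = R_3 = 10\,\mathrm{k\Omega},\ R_g = 200\,\Omega\ \Rightarrow\ G = 101
```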

2.3 A/D转换器的选择

A/D转换器的任务是将模拟量转换成数字量。本设计中A/D转换器的任务是将放大器输出的模拟信号转换为数字量输出。

串行和并行接口模式是A/D转换器诸多分类中的一种,但却是应用中器件选择的一个重要指标。在同样的转换分辨率及转换速度的前提下,不同的接口方式会对电路结构及采样周期产生影响。【4】对A/D转换器的选择,我们通过比较ADC0809和ADC0832来决定:两者都是常见的A/D转换器,其中ADC0809是并行接口A/D转换器,ADC0832是串行接口A/D转换器。本设计选择ADC0832:A/D转换在单片机接口中应用广泛,而串行A/D转换器具有功耗低、性价比高、芯片引脚少等特点。
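下面是按常见接法读取ADC0832的一个示意性C51草图(引脚分配为假设,具体时序请以器件手册为准):

```c
#include <reg51.h>
#include <intrins.h>   // _nop_() 空操作,作短延时

sbit ADC_CS  = P1^2;   // 片选(引脚分配为假设)
sbit ADC_CLK = P1^3;   // 串行时钟
sbit ADC_DIO = P1^4;   // DI/DO复用数据线

// 读取ADC0832通道ch(0或1)的8位转换结果(单端方式)
unsigned char read_adc0832(unsigned char ch)
{
    unsigned char i, dat = 0;

    ADC_CS = 0;                                                  // 片选有效,开始一次转换
    ADC_CLK = 0; ADC_DIO = 1;          _nop_(); ADC_CLK = 1; _nop_(); // 起始位
    ADC_CLK = 0; ADC_DIO = 1;          _nop_(); ADC_CLK = 1; _nop_(); // SGL/DIF=1:单端输入
    ADC_CLK = 0; ADC_DIO = ch ? 1 : 0; _nop_(); ADC_CLK = 1; _nop_(); // ODD/SIGN:通道选择
    ADC_CLK = 0; ADC_DIO = 1;                                    // 释放数据线,准备读出
    for (i = 0; i < 8; i++) {                                    // MSB在前读出8位结果
        ADC_CLK = 1; _nop_();
        ADC_CLK = 0; _nop_();
        dat = (dat << 1) | (unsigned char)ADC_DIO;
    }
    ADC_CS = 1;                                                  // 结束本次转换
    return dat;
}
```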

2.4 控制器选择

控制器选择AT89C51单片机。目前国外使用较多的微控制器是以51内核拓展出的单片机【5】。51单片机的使用已经发展到一个很高的层次,编程多以C语言为主,操作简单,用途广泛,易于控制。

AT89C51片内含8K字节可反复擦写的Flash只读程序存储器和256字节的随机存取数据存储器(RAM),有40个引脚、32个外部双向输入/输出(I/O)端口,同时内含2个外中断口、3个16位可编程定时计数器、2个全双工串行通信口和2个读写口线。AT89C51可以按照常规方法编程,也可以在线编程。它将通用微处理器和Flash存储器结合在一起,特别是可反复擦写的Flash存储器可有效地降低开发成本。
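结合其中的定时计数器,下面给出定时器0初始化的一种示意写法(假设12MHz晶振、方式1、约50ms溢出一次,数值为常见取值):

```c
#include <reg51.h>

// 定时器0初始化:16位定时方式1,12MHz晶振下每50000个机器周期(50ms)溢出
void timer0_init(void)
{
    TMOD &= 0xF0;      // 清除T0控制位
    TMOD |= 0x01;      // T0工作于方式1
    TH0 = 0x3C;        // 65536 - 50000 = 15536 = 0x3CB0
    TL0 = 0xB0;
    ET0 = 1;           // 允许T0中断
    EA  = 1;           // 开总中断
    TR0 = 1;           // 启动T0
}

// T0中断服务程序:重装初值,可在此做定时刷新或计时累加
void timer0_isr(void) interrupt 1
{
    TH0 = 0x3C;
    TL0 = 0xB0;
}
```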

2.5 LED显示电路

在应用系统中,如果需要显示的内容只有数码和某些字母,使用LED数码管是一种较好的选择。LED数码管显示清晰、成本低廉、配置灵活,接口简单易行。

LED数码管是由发光二极管作为显示字段的数码型显示器件。LED数码管按电路中的连接方式可以分为共阴型和共阳型两大类。

LED数码管编码方式:

当LED数码管与单片机相连时,一般将LED数码管的各笔段引脚a、b、…、g、dp按某一顺序接到AT89C51型单片机某一并行I/O口的D0、D1、…、D7。当该I/O口输出某一特定数据时,就能使LED数码管显示出某个字符。例如要使共阳极LED数码管显示"0",则a、b、c、d、e、f各笔段引脚为低电平,g和dp为高电平。【6-8】
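按上述编码方式,可用一张段码表驱动共阳极数码管。下面是一个示意草图(段码对应上述常见接线,实际取值取决于硬件连接;P0接段引脚为假设):

```c
#include <reg51.h>

// 共阳极数码管0~9段码表(低电平点亮,位序 dp g f e d c b a)
unsigned char code SEG_TAB[10] = {
    0xC0, 0xF9, 0xA4, 0xB0, 0x99,   /* 0 1 2 3 4 */
    0x92, 0x82, 0xF8, 0x80, 0x90    /* 5 6 7 8 9 */
};

// 将数字n(0~9)的段码送P0口显示:例如show_digit(0)点亮a~f、熄灭g和dp
void show_digit(unsigned char n)
{
    P0 = SEG_TAB[n];
}
```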

2.6 键盘

键盘是单片机系统实现人机对话的常用输入设备。操作员通过键盘向计算机系统输入各种数据和命令,也可通过键盘让单片机系统处于预定的功能状态。键盘按其内部电路结构,可分为编码键盘和非编码键盘两种【9-10】。编码键盘本身除了带有普通按键之外,还包括产生键码的硬件电路;使用时只要按下某一个键,硬件逻辑就会自动给出被按键的键码,使用十分方便,但价格较贵。非编码键盘由简单的硬件电路组成,仅提供各键被按下的信息,其余工作由软件完成;由于价格便宜、使用灵活,因此广泛应用于单片机应用系统中。
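下面是非编码独立按键扫描的一个示意草图(三个按键接P3口、低电平有效均为假设),包含软件延时消抖:

```c
#include <reg51.h>

sbit KEY_SET = P3^2;   // 三个独立按键,低电平有效(引脚分配为假设)
sbit KEY_INC = P3^3;
sbit KEY_DEC = P3^4;

// 粗略软件延时(12MHz晶振下约ms量级)
void delay_ms(unsigned int ms)
{
    unsigned int i, j;
    for (i = 0; i < ms; i++)
        for (j = 0; j < 120; j++);
}

// 扫描非编码键盘:返回键号1~3,无键按下返回0
unsigned char scan_keys(void)
{
    if (KEY_SET == 0) {
        delay_ms(10);                       // 延时消抖
        if (KEY_SET == 0) {
            while (KEY_SET == 0);           // 等待按键释放
            return 1;
        }
    }
    if (KEY_INC == 0) {
        delay_ms(10);
        if (KEY_INC == 0) { while (KEY_INC == 0); return 2; }
    }
    if (KEY_DEC == 0) {
        delay_ms(10);
        if (KEY_DEC == 0) { while (KEY_DEC == 0); return 3; }
    }
    return 0;
}
```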

总结:

综上所述,整个系统选用AT89C51作为主控制器,采用电阻应变式传感器配以电桥电路采集压力参数,通过三运放仪表放大器将采集到的微弱信号放大,并送给ADC0832转换器件,将模拟信号转换为单片机可识别的数字信号后发送给单片机;51单片机将信息分析处理后送LED数码管显示。系统选用非编码键盘,可在显示数据的同时向系统输入各种数据和命令。整个系统结构紧凑、成本低、可实现性高。
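把以上各模块串起来,主循环框架大致如下(示意草图:read_adc0832、show_digit、scan_keys、timer0_init均为前文草图中假设的函数;标度变换按量程0~500kg、8位ADC满量程255计):

```c
// 主循环示意:采样 → 标度变换 → 显示 → 按键处理
void main(void)
{
    unsigned char adc;
    unsigned int kg;

    timer0_init();                        /* 如需定时刷新,可用前文的定时器初始化 */
    while (1) {
        adc = read_adc0832(0);            /* 读取通道0的压力采样值 */
        kg = (unsigned int)((unsigned long)adc * 500 / 255);
                                          /* 标度变换:0~255 → 0~500kg,用长整型避免16位溢出 */
        show_digit(kg / 100);             /* 示意:仅送显百位;十位、个位经位选轮流送显 */
        scan_keys();                      /* 处理键盘命令(功能设置从略) */
    }
}
```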

参考文献:

[1] 杨帆. 传感器技术[M]. 西安:西安电子科技大学出版社,2008.
[2] 安盼龙,赵瑞娟. 电阻应变式传感器的研究[J]. 物理与工程,2010.
[3] 张靖,刘少强. 检测技术与系统设计[M]. 北京:中国电力出版社,2002.
[4] 朱彩霞. 基于AT89C51单片机A/D转换电路的研究[J]. 淮阴工学院学报,2011(01).
[5] 胡汉才. 单片机原理及其接口技术[M]. 北京:清华大学出版社,1996.
[6] 宫贤令主编. 自动显示技术[M]. 北京:冶金工业出版社,1989.
[7] 何利民编著. 单片机应用系统设计[M]. 北京:北京航空航天大学出版社,1994.
[8] Yongxian Song, Yuan Feng, Juanli Ma, Xianjin Zhang. Design of LED Display Control System Based on AT89C52 Single Chip Microcomputer[J]. Journal of Computers, Vol. 6, No. 4, April 2011.
[9] 何立民. MCS-51系列单片机应用系统设计:系统配置与接口技术[M]. 北京:北京航空航天大学出版社,1990.
[10] 林毅. 基于AT89C51单片机构成的键盘显示电路[J]. 现代电子技术,2006(13).

第五篇:单片机温度传感器论文

毕业设计(论文)答辩记录表(空白表格,栏目如下):学生姓名、性别、班级、所学专业、论文题目、指导老师、答辩小组成员、答辩教师提问、学生回答问题情况、答辩记录、指导教师评语(指导老师签名,年月日)、初评成绩(由指导老师填写)、答辩主持人签名(年月日)。

毕业设计(论文)评价表(空白表格,栏目如下):毕业设计(论文)评语、答辩评语、评定等级、答辩成员签名(年月日)、答辩委员会主任意见(签字,年月日)。

