第一篇:本科毕业论文中英文翻译--Wireless Communications无线通信
Wireless Communications
by
Joshua S. Gans, Stephen P. King and Julian Wright
1. Introduction
In 1895, Guglielmo Marconi opened the way for modern wireless communications by transmitting the three-dot Morse code for the letter 'S' over a distance of three kilometers using electromagnetic waves. From this beginning, wireless communications has developed into a key element of modern society. From satellite transmission, radio and television broadcasting to the now ubiquitous mobile telephone, wireless communications has revolutionized the way societies function.

This chapter surveys the economics literature on wireless communications. Wireless communications and the economic goods and services that utilise it have some special characteristics that have motivated specialised studies. First, wireless communications relies on a scarce resource – namely, radio spectrum – the property rights for which were traditionally vested with the state. In order to foster the development of wireless communications (including telephony and broadcasting) those assets were privatised. Second, use of spectrum for wireless communications required the development of key complementary technologies; especially those that allowed higher frequencies to be utilised more efficiently. Finally, because of its special nature, the efficient use of spectrum required the coordinated development of standards. Those standards in turn played a critical role in the diffusion of technologies that relied on spectrum use.

In large part our chapter focuses on wireless telephony rather than broadcasting and other uses of spectrum (e.g., telemetry and biomedical services). Specifically, the economics literature on that industry has focused on factors driving the diffusion of wireless telecommunication technologies and on the nature of network pricing regulation and competition in the industry. By focusing on the economic literature, this chapter complements other surveys in this Handbook. Hausman (2002) focuses on technological and policy developments in mobile telephony rather than economic research per se. Cramton (2002) provides a survey of the theory and practice of spectrum auctions used for privatisation. Armstrong (2002a) and Noam (2002) consider general issues regarding network interconnection and access pricing while Woroch (2002) investigates the potential for wireless technologies as a substitute for local fixed line telephony. Finally, Liebowitz and Margolis (2002) provide a general survey of the economics literature on network effects. In contrast, we focus here solely on the economic literature on the mobile telephony industry.

The outline for this chapter is as follows. The next section provides background information regarding the adoption of wireless communication technologies. Section 3 then considers the economic issues associated with mobile telephony including spectrum allocation and standards. Section 4 surveys recent economic studies of the diffusion of mobile telephony. Finally, section 5 reviews issues of regulation and competition; in particular, the need for and principles behind access pricing for mobile phone networks.

2. Background
Marconi's pioneering work quickly led to a variety of commercial and government (particularly military) developments and innovations. In the early 1900s, voice and then music was transmitted and modern radio was born. By 1920, commercial radio had been established with Detroit station WWJ and KDKA in Pittsburgh. Wireless telegraphy was first used by the British military in South Africa in 1900 during the Anglo-Boer war. The British navy used equipment supplied by Marconi to communicate between ships in Delagoa Bay. Shipping was a major early client for wireless telegraphy and wireless was standard for shipping by the time the Titanic issued its radio distress calls in 1912.

Early on, it was quickly recognized that international coordination was required for wireless communication to be effective. This coordination involved two features. First, the potential for interference in radio transmissions meant that at least local coordination was needed to avoid the transmission of conflicting signals. Secondly, with spectrum to be used for international communications and areas such as maritime safety and navigation, coordination was necessary between countries to guarantee consistency in approach to these services. This drove government intervention to ensure the coordinated allocation of radio spectrum.

2.1 Spectrum Allocation
Radio transmission involves the use of part of the electromagnetic spectrum. Electromagnetic energy is transmitted in different frequencies and the properties of the energy depend on the frequency. For example, visible light has a frequency between 4×10^14 and 7.5×10^14 Hz. Ultraviolet radiation, X-rays and gamma rays have higher frequencies (or equivalently a shorter wavelength) while infrared radiation, microwaves and radio waves have lower frequencies (longer wavelengths). The radio frequency spectrum involves electromagnetic radiation with frequencies between 3000 Hz and 300 GHz. Even within the radio spectrum, different frequencies have different properties. As Cave (2001) notes, the higher the frequency, the shorter the distance the signal will travel, but the greater the capacity of the signal to carry data.
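The trade-off Cave describes tracks the inverse relation between frequency and wavelength, λ = c/f. Here is a minimal sketch of that calculation (the band edges are the VLF and EHF limits quoted below; the 900 MHz point anticipates the GSM band mentioned later in the chapter):

```python
# Free-space wavelength: lambda = c / f. Band edges are those quoted in the
# text (3 kHz VLF lower edge, 300 GHz EHF upper edge); 900 MHz is the GSM band.

C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Return the free-space wavelength in meters for a given frequency."""
    return C / freq_hz

for label, freq in [("3 kHz", 3e3), ("900 MHz", 900e6), ("300 GHz", 300e9)]:
    print(f"{label}: {wavelength_m(freq):.4g} m")
# 3 kHz -> ~1e5 m, 900 MHz -> ~0.33 m, 300 GHz -> ~0.001 m
```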
The tasks of internationally coordinating the use of radio spectrum, managing interference and setting global standards are undertaken by the International Telecommunication Union (ITU). The ITU was created by the International Telecommunications Convention in 1947 but has predecessors dating back to approximately 1865. It is a specialist agency of the United Nations with over 180 members. The Radiocommunication Sector of the ITU coordinates global spectrum use through the Radio Regulations. These regulations were first put in place at the 1906 Berlin International Radiotelegraph Conference.

Allocation of the radio spectrum occurs along three dimensions – the frequency, the geographic location and the priority of the user with regard to interference. The radio spectrum is broken into eight frequency bands, ranging from Very Low Frequency (3 to 30 kHz) up to Extremely High Frequency (30 to 300 GHz). Geographically, the world is also divided into three regions. The ITU then allocates certain frequencies for specific uses on either a worldwide or a regional basis. Individual countries may then further allocate frequencies within the ITU international allocation. For example, in the United States, the Federal Communications Commission's (FCC's) table of frequency allocations is derived from both the international table of allocations and U.S. allocations. Users are broken into primary and secondary services, with primary users protected from interference from secondary users but not vice versa. As an example, in 2003, the band below 9 kHz was not allocated in the international or the U.S. table. 9 to 14 kHz was allocated to radio navigation in both tables and all international regions, while 14 to 70 kHz is allocated with both maritime communications and fixed wireless communications as primary users. There is also an international time signal at 20 kHz, but the U.S. table adds an additional time frequency at 60 kHz. International regional distinctions begin to appear in the 70 to 90 kHz range, with differences in use and priority between radio navigation, fixed, radiolocation and maritime mobile uses. These allocations continue right up to 300 GHz, with frequencies above 300 GHz not allocated in the United States and those above 275 GHz not allocated in the international table. The ITU deals with interference by requiring member countries to follow notification and registration procedures whenever they plan to assign frequency to a particular use, such as a radio station or a new satellite.
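One way to see the primary/secondary distinction is as a simple priority table. The sketch below is a toy model (the band entries are loosely adapted from the 2003 examples above; the secondary listing in the last band is hypothetical):

```python
# Toy model of a frequency-allocation table (illustrative only). The priority
# rule is the one stated in the text: primary users are protected from
# interference by secondary users, but not vice versa.

ALLOCATIONS = [
    # (low_kHz, high_kHz, primary services, secondary services)
    (9, 14, {"radio navigation"}, set()),
    (14, 70, {"maritime", "fixed"}, set()),
    (70, 90, {"radio navigation"}, {"radiolocation"}),  # hypothetical split
]

def must_accept_interference(band, victim, source):
    """True if `victim` must tolerate interference from `source` in `band`."""
    _, _, primary, secondary = band
    return victim in secondary and source in primary

band = ALLOCATIONS[2]
print(must_accept_interference(band, "radiolocation", "radio navigation"))  # True
print(must_accept_interference(band, "radio navigation", "radiolocation"))  # False
```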
2.2 The range of wireless services

Radio spectrum is used for a wide range of services. These can be broken into the following broad classes:
• Broadcasting services: including short wave, AM and FM radio as well as terrestrial television;
• Mobile communications of voice and data: including maritime and aeronautical mobile for communications between ships, airplanes and land; land mobile for communications between a fixed base station and moving sites, such as a taxi fleet and paging services; and mobile communications either between mobile users and a fixed network or between mobile users, such as mobile telephone services;
• Fixed Services: either point to point or point to multipoint services;
• Satellite: used for broadcasting, telecommunications and internet, particularly over long distances;
• Amateur radio;
• Other Uses: including military, radio astronomy, meteorological and scientific uses.

The amount of spectrum allocated to these different uses differs by country and frequency band. For example, in the U.K., 40% of the 88 MHz to 1 GHz band of frequencies is used for TV broadcasting, 22% for defense, 10% for GSM mobile and 1% for maritime communications. In contrast, none of the 1 GHz to 3 GHz frequency range is used for television, 19% is allocated to GSM and third-generation mobile phones, 17% to defense and 23% for aeronautical radar.

The number of different devices using wireless communications is rising rapidly. Sensors and embedded wireless controllers are increasingly used in a variety of appliances and applications. Personal digital assistants (PDAs) and mobile computers are regularly connected to e-mail and internet services through wireless communications, and wireless local area networks for computers are becoming common in public areas like airport lounges. However, by far the most important and dramatic change in the use of wireless communications in the past twenty years has been the rise of the mobile telephone.

2.3 The rise and rise of mobile telephony
The history of mobile telephones can be broken into four periods. The first (pre-cellular) period involved mobile telephones that exclusively used a frequency band in a particular area. These telephones had severe problems with congestion and call completion. If one customer was using a particular frequency in a geographic area, no other customer could make a call on that same frequency. Further, the number of frequencies allocated by the FCC in the U.S. to mobile telephone services was small, limiting the number of simultaneous calls. Similar systems, known as A-Netz and B-Netz, were developed in Germany.

The introduction of cellular technology greatly expanded the efficiency of frequency use of mobile phones. Rather than exclusively allocating a band of frequency to one telephone call in a large geographic area, a cellular telephone system breaks down a geographic area into small areas or cells. Different users in different (non-adjacent) cells are able to use the same frequency for a call without interference. First generation cellular mobile telephones developed around the world using different, incompatible analogue technologies. For example, in the 1980s in the U.S. there was the Advanced Mobile Phone System (AMPS), the U.K. had the Total Access Communications System (TACS), Germany developed C-Netz, while Scandinavia developed the Nordic Mobile Telephone (NMT) system. The result was a wide range of largely incompatible systems, particularly in Europe, although the single AMPS system was used throughout the U.S.
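The reuse idea can be shown in a few lines. This is a deliberately simplified sketch (a one-dimensional strip of cells with a three-channel repeating pattern; real systems use two-dimensional hexagonal clusters), but it captures why non-adjacent cells can share a frequency:

```python
# Toy illustration of cellular frequency reuse (assumption: a 1-D strip of
# cells and a 3-channel repeating pattern, invented for illustration).

def assign_channels(num_cells, reuse=3):
    """Assign channels cyclically so that adjacent cells never share one."""
    return [i % reuse for i in range(num_cells)]

channels = assign_channels(9)
print(channels)  # [0, 1, 2, 0, 1, 2, 0, 1, 2]: channel 0 serves cells 0, 3, 6
# No two adjacent cells share a channel, so each channel carries many calls:
assert all(a != b for a, b in zip(channels, channels[1:]))
```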
Second generation (2G) mobile telephones used digital technology. The adoption of second generation technology differed substantially between the United States and Europe and reverses the earlier analogue mobile experience. In Europe, a common standard was adopted, partly due to government intervention. Groupe Speciale Mobile (GSM) was first developed in the 1980s and was the first 2G system. But it was only in 1990 that GSM was standardized (with the new name of Global System for Mobile communication) under the auspices of the European Technical Standards Institute. The standardized GSM could allow full international roaming, automatic location services, common encryption and relatively high quality audio. GSM is now the most widely used 2G system worldwide, in more than 130 countries, using the 900 MHz frequency range. In contrast, a variety of incompatible 2G standards developed in the United States. These include TDMA and CDMA, referring to Time and Code Division Multiple Access respectively; TDMA is a close relative of GSM. These technologies differ in how they break down calls to allow for more efficient use of spectrum within a single cell. While there is some argument as to the 'better' system, the failure of the U.S. to adopt a common 2G standard, with the associated benefits in terms of roaming and switching of handsets, meant the first generation AMPS system remained the most popular mobile technology in the U.S. throughout the 1990s.

The final stage in the development of mobile telephones is the move to third generation (3G) technology. These systems allow for significantly increased speeds of transmission and are particularly useful for data services. For example, 3G phones can more efficiently be used for e-mail services and for downloading content (such as music and videos) from the internet. They also allow more rapid transmission of images, for example from camera phones. An attempt to establish an international standard for 3G mobile is being moderated through the ITU, under the auspices of its IMT-2000 program. IMT-2000 determined that 3G technology should be based on CDMA systems, but there are (at least two) alternative competing systems and IMT-2000 did not choose a single system but rather a suite of approaches. At the ITU's World Radiocommunication Conference in 2000, frequencies for IMT-2000 systems were allocated on a worldwide basis. By 2002, the only 3G system in operation was in Japan, although numerous companies have plans to roll out 3G systems in the next few years.

The growth in use of mobile telephones has been spectacular. From almost a zero base in the early 1980s, mobile penetration worldwide in 2002 is estimated at 15.57 mobile phones per 100 people. Of course, the level of penetration differs greatly between countries. In the United States, there were 44.2 mobile telephones per 100 inhabitants, with penetration rates of 60.53 in France, 68.29 in Germany, 77.84 in Finland and 78.28 in the United Kingdom. Thus, in general mobile penetration is lower in the U.S. than in the wealthier European countries. Outside Europe and the U.S., the penetration rate in Australia is 57.75, 62.13 in New Zealand, and 58.76 in Japan. Unsurprisingly, penetration rates depend on the level of economic development, so that India had only 0.63 mobile telephones per 100 inhabitants in 2002, with 1.60 for Kenya, 11.17 for China, and 29.95 for Malaysia. The number of mobile phones now exceeds the number of fixed-wire telephone lines in a variety of countries including Germany, France, the United Kingdom, Greece, Italy and Belgium. However, the reverse holds, with fixed lines outnumbering mobiles, in the United States, Canada, and Argentina. Penetration rates were close to equal in Japan in 2001, but in all countries, mobile penetration is rising much faster than fixed lines.

The prices for mobile phone services are difficult to compare between countries. In part this reflects exchange rate variations, but more importantly pricing packages and the form of pricing differ significantly between countries. Most obviously, different countries have different charging mechanisms, with 'calling party pays' dominating outside the United States. But in the United States and Canada 'receiving party pays' pricing often applies for calls to mobile telephones. Different packages and bundling of equipment and call charges also make comparisons difficult. A major innovation in mobile telephone pricing in the late 1990s was the use of pre-paid cards. This system, where customers pay in advance for mobile calls rather than being billed at a later date, has proved popular in many countries. For example, in Sweden, pre-paid cards gained 25% of the mobile market within two years of their introduction (OECD, 2000, p. 11). Despite the changing patterns of pricing, the OECD estimates that there was a 25% fall in the cost of a representative 'bundle' of mobile services over its member countries between 1992 and 1998 (OECD, 2000, p. 22).

3. Economic Issues in Wireless Communications
3.1 Spectrum as a scarce resource

Radio spectrum is a natural resource, but one with rather unusual properties. As noted above, it is non-homogeneous, with different parts of the spectrum being best used for different purposes. It is finite in the sense that only part of the electromagnetic spectrum is suitable for wireless communications, although both the available frequencies and the carrying capacity of any transmission system depend on technology. The radio spectrum is non-depletable; using spectrum today does not reduce the amount available for use in the future. But it is non-storable.

Under ITU guidance, spectrum has been allocated to specific uses and then assigned to particular users given the relevant use. Traditionally, user assignment was by government fiat. Not infrequently, the user was government owned. Privatizations in the 1980s and 1990s, and the success of (at least limited) mobile telephone competition in some countries, resulted in a more arms-length process of spectrum allocation developing in the 1990s. Users of radio spectrum, and particularly users of 2G and 3G mobile telephone spectrum, have generally been chosen by one of two broad approaches since the early 1990s – a 'beauty contest' or an auction.

A 'beauty contest' involves potential users submitting business plans to the government (or its appointed committee). The winners are then chosen from those firms submitting plans. There may be some payment to the government by the winners, although the potential user most willing to pay for the spectrum need not be among the winners. For example, the U.K. used a beauty contest approach to assign 2G mobile telephone licenses in the 1990s. Sweden and Spain have used beauty contests to assign 3G licenses. France used a beauty contest to assign four 3G licenses. The national telecommunications regulator required firms to submit applications by the end of January 2001. These applications were then evaluated according to preset criteria and given a mark out of 500. Criteria included employment (worth up to 25 points), service offerings (up to 50 points) and speed of deployment (up to 100 points). Winning applicants faced a relatively high license fee set by the government. As a result, there were only two applicants. These firms received their licenses in June 2001, with the remaining two licenses unallocated (Penard, 2002).

The concept of using a market mechanism to assign property rights over spectrum and to deal with issues such as interference goes back to at least the 1950s, when it was canvassed by Herzel (1951) and then by Coase (1959). But it was more than thirty years before spectrum auctions became common. New Zealand altered its laws to allow spectrum auctions in 1989, and in the early 1990s auctions were used to assign blocks of spectrum relating to mobile telephones, television, radio broadcasting and other smaller services to private management (Crandall, 1998). In August 1993, U.S. law was modified to allow the FCC to use auctions to assign radio spectrum licenses, and by July 1996 the FCC had conducted seven auctions and assigned over 2,100 licenses (Moreton and Spiller, 1998). This included the assignment of two new 2G mobile telephone licenses in each region of the U.S. through two auctions. In 2000, the U.K. auctioned off five 3G licenses for a total payment of approximately $34b. Auctions have involved a variety of formats, including 'second price sealed bid' in New Zealand, modified ascending bid in the U.S., and a mixed ascending bid and Dutch auction format in the U.K.
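To make one of these formats concrete, here is a minimal sketch of the textbook second-price sealed-bid (Vickrey) rule; it is illustrative only, not a description of the actual New Zealand procedure:

```python
# Second-price sealed-bid (Vickrey) auction: the highest bidder wins but
# pays the second-highest bid. Illustrative sketch with invented bids.

def vickrey(bids):
    """bids: {bidder: amount}. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

print(vickrey({"firm_a": 120.0, "firm_b": 95.0, "firm_c": 80.0}))
# ('firm_a', 95.0) -- bidding one's true valuation is a dominant strategy
```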
Bidders may have to satisfy certain criteria, such as service guarantees and participation deposits, before they can participate in the auctions. Limits may also be placed on the number of licenses a single firm can win in a particular geographic area, so that the auction does not create a monopoly supplier. From an economic perspective, using an auction to assign spectrum helps ensure that the spectrum goes to the highest value user.

While auctions have been used to assign spectrum to different users, they still involve a prior centralized allocation of bands of spectrum to particular uses. Economically, this can lead to an inefficient use of spectrum. A user of a particular frequency band (e.g. for 3G services) might have a much higher willingness-to-pay for neighboring spectrum than the current user of that neighboring spectrum (e.g. a broadcaster or the military). But the prior allocation of frequency bands means that these parties are unable to benefit from mutually advantageous trade. It would violate the existing license conditions to move spectrum allocated to one use into another use even if this is mutually advantageous. Building on the work of Coase (1959), Valletti (2001) proposes a system of tradable spectrum rights, using the market to both allocate spectrum to uses and simultaneously assign it to users. Interference can be dealt with through the assignment of property rights and negotiation between owners of neighboring spectrum. Valletti notes that both competition issues and issues of mandated standards would need to be addressed in a market for spectrum rights. We deal with the issue of standards later in this section while competition issues are considered in section 5 below.

Noam (1997) takes the concept of tradable spectrum assignment one stage further. Technological advancements, such as the ability for a signal to be broken into numerous separate digital packets for the purposes of transmission and then reassembled on reception, mean that the concept of permanent spectrum assignment may become redundant in the near future. As technology advances, Noam argues, spot and forward markets can be used to assign use within designated bands of spectrum. The price of spectrum use would then alter to reflect congestion of use. DeVany (1998) also discusses market-based spectrum policies, including the potential for a future "open, commoditized, unbundled spectrum market system" (p. 641).
Conflicts in the allocation of spectrum arose in the FCC auctions in the U.S. The 1850-1910 MHz and 1930-1990 MHz bands to be allocated by these auctions already had private fixed point-to-point users. The FCC ruled that existing users had a period of up to three years to negotiate alternative spectrum location and compensation with new users. If negotiations failed, the existing user could be involuntarily relocated. Cramton, Kwerel and Williams (1998) examine a variety of alternative 'property rights' regimes for negotiated reallocation of existing spectrum and conclude that the experience of the U.S. reallocations is roughly consistent with simple bargaining theory.

While economists have generally advocated the assignment of spectrum by auction, auctions are not without their critics. Binmore and Klemperer (2002) argue that a number of the arguments against auctions are misguided. But both Noam (1997) and Gruber (2001b) make the criticism that spectrum auctions automatically create a non-competitive oligopoly environment. Gruber argues that technological change has generally increased the efficiency of spectrum use and increased the viability of competition in wireless services. For example, in terms of spectral efficiency, GSM mobile telephone services are approximately four to thirty times more efficient than earlier analogue systems (Gruber, 2001b, Table 1). An auction of spectrum rights, however, is preceded by an allocation of spectrum. The government usually allocates a fixed band of spectrum to the relevant services. Further, the government usually decides on the number of licenses that it will auction within this band. So the price paid at the auction and the level of ex post competition in the relevant wireless services are determined by the amount of spectrum and the number of licenses the government initially allocates to the service. While the auction creates competition for the scarce spectrum, it does not allow the market to determine the optimal form of competition. Noam argues that flexibility of entry needs to be provided by the assignment system in order to overcome the artificial creation of a non-competitive market structure.

3.2 Complementarities in spectrum use

Using spectrum to produce wireless communications services can lead to synergies between services and between geographic regions. In the U.K. 3G spectrum auction, the potential synergies between 2G and 3G mobile telephone infrastructure were noted by Binmore and Klemperer:
[T]he incumbents who are already operating in the 2G telecom industry enjoy a major advantage over potential new entrants … Not only are the incumbents' 2G businesses complementary to 3G, but the costs of rolling out the infrastructure (radio masts and the like) necessary to operate a 3G industry are very substantially less than those of a new entrant, because they can piggyback on the 2G infrastructure. (2002, p. C80)

Thus, there are synergies in terms of being able to supply new products to an existing customer base using existing brands, and economies of scope between 2G and 3G services. Geographic synergies are evident from the FCC 2G auctions. Moreton and Spiller (1998) examine the two 1995-96 mobile phone auctions in the U.S. They run a reduced-form regression of the winning bid for each license on a number of factors designed to capture the demographics of the relevant license area, the competitive and regulatory environment, and the effects of any synergies. These were ascending bid auctions, so that the winning price is approximately equal to the second-to-last bidder's valuation for the license. As such, the relevant synergies relate to the network of the second-to-last bidder, to capture any effect of this network on the value of that bidder. To capture the effect of geographic synergies, Moreton and Spiller assume that the expected network associated with any bidder is the same as the actual post-auction network. They categorize geographic synergies as either 'local' or 'global'. Local synergies consider the relationship between the value of a license in one area and ownership of 2G licenses in neighboring geographic areas. Global synergies look at the total extent of the second-to-last bidder's national network.

Moreton and Spiller find strong evidence of local synergies: "At the local level, our results indicate that groups of two or more adjacent licenses were worth more to a single bidder than to separate bidders." (p. 711) These local synergies appear to fall rapidly as the geographic area covered by adjacent licenses increases, and evidence of global synergies is weak. Local coverage by existing cellular services tended to reduce the price paid for 2G licenses in the Moreton and Spiller study. This appears to run counter to the Binmore and Klemperer argument for economies of scope between different mobile telephone services. Moreton and Spiller argue that the negative relationship may reflect a reduction in competition: firms are reluctant to bid strongly against existing analogue mobile telephone incumbents and prefer to use their limited resources elsewhere. This argument, however, is weak. In an ascending bid auction, participants will bid up to their own valuations, and if there are positive synergies between existing analogue mobile services and 2G services, this should raise the value of the license to the second-to-last bidder regardless of any other party's bids.

As expected, Moreton and Spiller find that the value of a 2G license increases with market population and population growth rate and decreases with the size of the area served. These results are broadly consistent with Ausubel et al. (1997) and are intuitive. Population and demand are likely to be positively correlated, so that for any given level of competition, increased population will tend to increase expected profits. But increased geographic region tends to raise the roll-out cost of the 2G cellular network for any population size, lowering expected profits.
The Moreton and Spiller study finds some evidence that those jurisdictions where regulators require tariff filing for the existing analogue mobile phone networks tend to have higher values for the 2G licenses. This suggests that tariff filing on existing services may have an anti-competitive effect leading to higher prices overall. The potential anti-competitive effects associated with regulatory notification and publication of price information have been shown in other industries.
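The reduced-form approach can be sketched in a few lines. Everything below is invented for illustration (synthetic data, made-up coefficients and variable names); only the method – an OLS regression of log winning bids on license-area covariates – mirrors the study described above.

```python
# Illustrative reduced-form OLS of log winning bids on license covariates.
# All data are synthetic; only the method mirrors the study described above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pop = rng.uniform(0.1, 10.0, n)      # market population, millions (invented)
growth = rng.uniform(0.0, 0.05, n)   # population growth rate (invented)
area = rng.uniform(1.0, 100.0, n)    # area served, 1000 sq km (invented)
adjacent = rng.integers(0, 2, n)     # 1 if adjacent licenses held (invented)

# Hypothetical data-generating process: value rises with population, growth
# and local synergies, and falls with area served.
log_bid = (1.0 + 0.8 * np.log(pop) + 4.0 * growth
           - 0.3 * np.log(area) + 0.2 * adjacent
           + rng.normal(0.0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(pop), growth, np.log(area), adjacent])
beta, *_ = np.linalg.lstsq(X, log_bid, rcond=None)
print(dict(zip(["const", "log_pop", "growth", "log_area", "adjacent"],
               np.round(beta, 2))))
```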
无线通信
约书亚·S·甘斯、斯蒂芬·P·金和朱利安·赖特 著
1. Introduction 概述 1895年，Guglielmo Marconi利用电磁波把字母S的三点摩尔斯电码传送到3公里以外，开辟了现代无线通信的道路。从此，无线通信逐渐成为现代社会的重要组成部分。从卫星传输、广播电视，到如今无处不在的移动电话，无线通信已经使社会的运转方式发生了革命性的变化。
这一章主要是概述经济文献对无线通信的研究现状。
无线通信及利用它的经济产品和服务具有一些特殊的性质，促使人们对其进行专门研究。首先，无线通信建立在一种稀缺资源之上，即无线电频谱，而其产权传统上为国家所掌控。为了促进无线通信（包括电话和广播）的发展，这些资产被私有化了。其次，利用频谱进行无线通信要求关键互补性技术的发展，尤其是那些使较高频率得到更有效利用的技术。最后，由于其本身的特性，频谱的高效使用又要求标准的协调发展。这些标准反过来又大大推动了基于频谱使用的技术的扩散。本章的重点是无线电话，而不是广播或其他频谱用途（例如遥测和生物医学服务）。具体而言，关于该产业的经济学文献主要关注推动无线通信技术扩散的因素，以及该产业中网络定价管制与竞争的性质。
通过关注这些经济文献,本章补充说明了整个手册中其他调查中的一些问题。Hausman主要研究的是移动电话技术和政策的发展而不是经济研究本身。Cramton侧重于,在频谱私有化过程中,频谱拍卖的一些理论与实践。Armstrong 和Noam 主要研究网络互连和接入定价的有关问题,与此同时Woroch关注的则是无线技术取代本地固定电话的可能性。而Liebowitz and Margolis则侧重于网络效应。而我们主要是侧重于研究移动电话产业的一些经济文献。
本章的结构如下:
第二部分主要是采用无线通信技术的背景信息。
第三部分讨论的是与移动电话有关的一些经济问题,包括频谱分配和标准。第四部分是最近关于移动电话普及的经济学研究。
最后,第五部分关注的是规则和竞争力的问题,尤其是,移动电话网络接入定价的需求和原则。
2.Background 背景
Marconi开创性的工作很快就引发了一系列商业和政府（尤其是军事）部门的发展和创新。20世纪初，声音和音乐先后实现了无线传输，现代无线电广播由此诞生。到1920年，随着底特律WWJ电台和匹兹堡KDKA电台的建立，商业广播已经形成。无线电报于1900年的英布战争中由驻南非的英国军队首先使用。英国海军使用Marconi提供的设备在德拉戈阿湾的舰船之间进行通信。航运业是无线电报早期的主要客户；到1912年泰坦尼克号发出无线电求救信号时，无线电已成为船舶的标准配备。人们很早就认识到，要使无线通信有效运作，必须进行国际协调。这种协调涉及两个方面：首先，无线电传输可能存在干扰，这意味着至少需要局部协调以避免发送相互冲突的信号；其次，由于频谱要用于国际通信以及海上安全和导航等领域，各国之间必须协调，以保证这些服务做法的一致。这就促使政府进行干预，以确保无线电频谱的协调分配。

2.1 频谱分配
无线电传输涉及电磁频谱的使用。电磁能量以不同的频率传输，而且能量的特性取决于频率。例如，可见光的频率在4×10^14到7.5×10^14 Hz之间。紫外线、X射线和伽玛射线频率较高（或者说波长较短），而红外线、微波和无线电波频率较低（波长较长）。无线电频谱中电磁辐射的频率在3000 Hz到300 GHz之间。即使在无线电频谱内部，不同的频率也具有不同的性能。正如Cave（2001）所指出的，频率越高，信号传播的距离越短，但信号承载数据的容量越大。
国际间协调无线电频谱使用、管理干扰和制定全球标准的任务是由国际电信联盟（ITU）承担的。ITU依据1947年的《国际电信公约》成立，其前身可以追溯到1865年左右。它是联合国的一个专门机构，拥有180多个成员国。其中的无线电通信部门通过《无线电规则》协调全球的频谱使用。这些规则是在1906年的柏林国际无线电报会议上首次设立的。无线电频谱的分配涉及三个维度——频率、地理位置和用户在干扰方面的优先级。无线电频谱被分为八个频段，从甚低频（3—30 kHz）到极高频（30—300 GHz）。在地理上，全世界被分为三个区域。ITU在全球或区域的基础上把特定频率分配给特定用途。然后，各国在ITU国际分配的基础之上进一步分配本国的频段。例如，在美国，联邦通信委员会（FCC）的频率分配表就是由国际频率分配表和美国国内分配共同形成的。用户被分为主要业务和次要业务两类，主要用户免受次要用户的干扰，反之则不然。例如，2003年，9 kHz以下的频段在国际分配表和美国分配表中都没有被分配。9—14 kHz在两个分配表和所有国际区域中都被用于无线电导航，而14—70 kHz则以海上通信和固定无线通信为主要用户。国际上在20 kHz处设有一个时间信号，而美国的分配表在60 kHz处又增加了一个时间频率。从70—90 kHz频段开始，国际区域间出现差异，无线电导航、固定、无线电定位和水上移动等用途在使用和优先级上各不相同。这样的分配一直持续到300 GHz：300 GHz以上的频率在美国没有被分配，而国际分配表中275 GHz以上的频率未被分配。为预防干扰，ITU要求各成员国在计划把频率指配给某一特定用途（例如无线电台或新卫星）时，必须遵守通知和登记程序。
2.2 无线服务的应用范围
无线电频谱有着广泛的用途，可以分为以下几大类：
广播服务：包括短波、调幅（AM）和调频（FM）电台以及地面电视；
语音和数据的移动通信：包括用于船舶、飞机与陆地之间通信的海上和航空移动通信；固定基站与出租车队、寻呼等移动台之间的陆地移动通信；以及移动用户与固定网络之间或移动用户相互之间的移动通信，例如移动电话业务；
固定服务:点对点或一点对多点服务
卫星：用于广播、电信和因特网，特别是长距离通信；
业余无线电；
其他用途:包括军事,射电天文,气象和科学用途
实现以上功能所使用频谱数量因为国家和波段的不同而有所不同。
例如,在英国从88MHz 到1GHz之间这一波段,其中40%用于电视广播,22%用于国防,10%用于GSM移动通信,1%用于海上通信。相反,从1GHz 到3 GHz这一波段,没有安排电视通信,19%用于GSM通信和3G移动电话,17%用于国防,23%用于航空雷达。
不同种类的无线通信设备的数量正在急剧增加。传感器和嵌入式无线控制器越来越多地用于各种电器和应用中。个人数字助理（PDA）和移动电脑经常通过无线通信收发电子邮件和接入因特网，而像机场候机厅这样可供电脑无线上网的公共场所也不再稀奇。尽管如此，在过去二十年中，无线通信应用中最重要也是最引人注目的变化还是移动电话的兴起。
2.3 移动电话的出现和普及
移动电话的历史可以分为四个时期。第一个（前蜂窝）时期，移动电话在特定地区独占使用某一频段。这些电话在拥塞和接通率方面存在严重问题。如果一个用户在某一地区使用了某一频率，其他用户就无法在同一频率上通话。而且，美国FCC分配给移动电话业务的频率数量也比较少，限制了同时通话的数量。德国也开发了类似的系统，如A-Netz和B-Netz。蜂窝技术的采用极大地提高了移动电话的频率使用效率。不再是在一个很大的区域内把一个频段独占地分配给一路通话，蜂窝系统把一个地理区域划分成若干小的区域，即蜂窝小区。不同的用户在不同的（不相邻的）小区内可以使用同一频率通话而不会相互干扰。第一代蜂窝移动电话在世界各地采用了互不兼容的模拟技术。例如，20世纪80年代，美国采用了先进移动电话系统（AMPS），英国采用了全接入通信系统（TACS），德国开发了C-Netz，而斯堪的纳维亚各国使用了北欧移动电话（NMT）系统。其结果是出现了大量互不兼容的系统，尤其是在欧洲，而整个美国使用的是统一的AMPS系统。
第二代（2G）移动电话使用了数字技术。2G技术的采用在美国和欧洲之间差别很大，并且与早期模拟移动电话的经历正好相反。部分由于政府干预，欧洲采用了统一的标准。GSM（Groupe Speciale Mobile）于20世纪80年代开始开发，是第一个2G系统。但直到1990年，GSM才在欧洲电信标准协会（ETSI）的主持下实现标准化（并更名为全球移动通信系统）。标准化的GSM能够实现完全的国际漫游、自动定位服务、统一的加密和相对高质量的音频。GSM是目前全球使用最广泛的2G系统，应用于130多个国家，使用900 MHz频段。与此相反，美国出现了一系列互不兼容的2G标准，包括TDMA（GSM的近亲）和CDMA，分别指时分多址和码分多址。这些技术的差别在于如何分解通话，从而在单个小区内更有效地利用频谱。虽然关于哪个系统“更好”尚有争论，但美国未能采用统一的2G标准，也就无法获得漫游和更换手机方面的相应好处，这意味着整个90年代第一代AMPS系统依然是美国最流行的移动技术。移动电话发展的最后一个阶段是向第三代（3G）技术的过渡。这些系统将大大提高传输速率，对数据业务尤为有用。例如，3G电话可以更高效地收发电子邮件，并从网上下载音乐和视频等内容。类似拍照手机拍摄的图片也能更快地传输。
在IMT-2000计划下，ITU正在协调建立统一的国际3G标准。IMT-2000确定3G技术应以CDMA系统为基础，但存在（至少两种）相互竞争的方案，因此IMT-2000没有选定单一系统，而是采纳了一组方案。在ITU 2000年世界无线电通信大会上，IMT-2000系统使用的频率在全球范围内得以确定。到2002年，只有日本开通了3G系统，尽管很多公司都计划在随后几年推出3G业务。
移动电话使用量的增长十分惊人。从20世纪80年代初几乎为零的基数开始，到2002年，据估计全世界平均每一百人拥有15.57部移动电话。当然，各国的普及水平差别很大。在美国，每一百人拥有44.2部移动电话，而法国为60.53部，德国68.29部，芬兰77.84部，英国78.28部。因此，总体而言，美国的移动电话普及率低于欧洲较富裕的国家。在欧洲和美国之外，澳大利亚的普及率为57.75，新西兰为62.13，日本为58.76。
毫无疑问，普及率与经济发展水平密切相关：2002年印度每一百人只拥有0.63部移动电话，肯尼亚为1.60部，中国11.17部，马来西亚29.95部。在一些国家，移动电话的数量已经超过了固定电话，例如德国、法国、英国、希腊、意大利和比利时。但在美国、加拿大和阿根廷，固定电话依然多于移动电话。2001年，日本两者的普及率大致相当。但在所有国家，移动电话普及速度都远远高于固定电话。
不同国家间移动电话服务的资费难以比较。部分原因是汇率的变化，但更主要的是不同国家的价格套餐和定价形式不同。最明显的是，不同的国家有不同的收费机制，美国以外的国家主要采用“主叫方付费”。但在美国和加拿大，打给移动电话的呼叫往往采用“接听方付费”的计费方式。不同的套餐以及设备与话费的捆绑也使得比较难以进行。90年代后期移动电话定价最重要的变革之一是预付卡的采用。
这种消费者预先为通话付费而不是事后结算的方式在很多国家大受欢迎。例如，在瑞典，预付卡推出两年内就占领了25%的移动市场份额（OECD，2000，第11页）。
尽管定价模式在不断变化，但据OECD估计，从1992年到1998年，其成员国一组有代表性的移动服务“组合”的费用平均下降了25%（OECD，2000，第22页）。

3. 无线通信的经济问题
3.1 稀缺的频谱资源
无线电频谱是一种自然资源，但其性质颇为特殊。就像上文提到的，它是非均质的，不同的部分最适合用于不同的用途。它是有限的，因为只有部分电磁频谱适合无线通信，尽管可用的频率和任何传输系统的承载容量都取决于技术。无线电频谱是不会耗竭的：今天使用频谱并不会减少将来可用的数量，但它无法储存。在ITU的指导下，频谱先按用途分配，然后在既定用途下指配给特定用户。
以前，用户指配都是靠政府法令，而且用户往往就是政府所有的机构。80和90年代的私有化浪潮以及一些国家（至少是有限的）移动电话竞争的成功，促使90年代形成了一种更加规范独立的频谱分配程序。从90年代早期开始，无线电频谱的用户，尤其是2G和3G移动电话频谱的用户，一般通过两种主要方法之一选出：“选美比赛”或拍卖。“选美比赛”要求潜在用户向政府（或其任命的委员会）提交商业计划，胜出者从提交计划的公司中产生。胜出者可能要向政府支付一笔费用，但最愿意为频谱付钱的潜在用户未必在胜出者之列。例如，90年代英国就用“选美比赛”的方法颁发了2G移动电话牌照。瑞典和西班牙也用这种方法颁发了3G牌照。法国用同样的方法颁发了四张3G牌照：国家电信监管机构要求公司在2001年一月底之前提交申请，然后根据事先确定的标准对申请评分，总分500分，标准包括就业（最多25分）、服务内容（最多50分）和部署速度（最多100分）。胜出的公司须支付政府设定的相当高的牌照费。结果只有两家公司提出申请，它们于2001年六月拿到牌照，剩下的两张无人问津（Penard，2002）。利用市场机制来配置频谱产权并处理干扰等问题的想法至少可以追溯到20世纪50年代，先后由Herzel（1951）和Coase（1959）提出。
但直到三十多年后，频谱拍卖才开始普及。新西兰于1989年修改法律允许拍卖频谱；90年代早期，与移动电话、电视、无线电广播及其他较小业务有关的频谱区块开始通过拍卖交给私人管理（Crandall，1998）。1993年八月，美国修改法律允许FCC通过拍卖指配无线电频谱牌照；到1996年七月，FCC已经举行了七次拍卖，颁发了2100多张牌照（Moreton和Spiller，1998），其中包括通过两次拍卖在美国每个地区颁发的两张新的2G移动电话牌照。2000年，英国拍卖了五张3G牌照，总价约340亿美元。拍卖采用过多种形式，包括新西兰的“第二价格密封投标”、美国的改进增价拍卖，以及英国的增价与荷兰式混合拍卖。
竞标者在参加拍卖前可能必须满足一定的标准，比如服务保证和参与保证金。在特定地理区域内，单个公司能够拍得的牌照数量也可能受到限制，以防止拍卖造就垄断供应商。从经济学角度来说，通过拍卖指配频谱有助于确保频谱落到价值最高的用户手中。不过，虽然拍卖被用来把频谱指配给不同用户，但在拍卖之前，频段仍然要先由中央统一分配给特定用途。
从经济上说，这可能导致频谱使用的效率低下。某一频段的用户（例如3G业务）对相邻频段的支付意愿可能远高于该相邻频段的现有用户（例如广播机构或军方）。但频段的事先分配意味着双方无法从互利的交易中获益。即使双方都能得利，把分配给一种用途的频谱转作他用也会违反现有的牌照条款。
在Coase（1959）工作的基础上，Valletti（2001）提出了一个可交易频谱产权的体系，利用市场在分配频谱用途的同时指定用户。干扰问题可以通过产权界定和相邻频谱所有者之间的谈判来解决。Valletti指出，在频谱产权市场中，竞争问题和强制标准问题都需要加以处理。我们在本节稍后讨论标准问题，竞争问题则在下面第5节讨论。Noam（1997）把可交易频谱指配的概念又向前推进了一步。
技术进步，例如信号可以在发送端分解成许多单独的数字分组而在接收端重新组合，意味着永久性频谱指配的概念在不久的将来可能变得多余。Noam认为，随着技术进步，可以利用现货和期货市场来指配特定频段内的使用权，频谱使用的价格将随使用的拥挤程度而变化。DeVany（1998）也讨论过基于市场的频谱政策，包括未来建立一个“开放、商品化、非捆绑的频谱市场体系”的可能性（第641页）。美国FCC的拍卖中出现过频谱分配上的冲突。这些拍卖所分配的1850—1910 MHz和1930—1990 MHz频段已经有私人固定点对点用户。FCC裁定，现有用户有最多三年的时间与新用户谈判频谱搬迁和补偿问题；如果谈判失败，现有用户将被强制搬迁。
Cramton、Kwerel和Williams（1998）考察了多种用于协商重新分配现有频谱的“产权”制度，得出的结论是：美国频谱重新分配的经验与简单的讨价还价理论大体一致。虽然经济学家一般都提倡通过拍卖指配频谱，但拍卖也不乏批评者。Binmore和Klemperer（2002）认为，许多反对拍卖的论点是误入歧途的。但Noam（1997）和Gruber（2001b）都批评说，频谱拍卖自动造成了一种非竞争性的寡头垄断环境。Gruber认为，技术变革总体上提高了频谱使用效率，增强了无线服务竞争的可行性。例如，就频谱效率而言，GSM移动电话大约是早期模拟系统的4到30倍（Gruber，2001b，表1）。然而，频谱产权的拍卖之前先有频谱的分配。政府通常把固定的频段分配给相关业务，并决定在该频段内拍卖的牌照数量。所以拍卖的成交价格和相关无线服务事后的竞争程度，是由政府最初分配给该业务的频谱数量和牌照数量决定的。拍卖虽然造就了对稀缺频谱的竞争，却并不允许市场决定最优的竞争形式。Noam认为，指配体系必须提供进入的灵活性，以克服人为造成的非竞争性市场结构。
3.2 频谱使用中的互补性
利用频谱提供无线通信服务可能产生业务之间以及地理区域之间的协同效应。在英国3G频谱拍卖中，Binmore和Klemperer就指出了2G与3G移动电话基础设施之间的潜在协同效应：已经在2G电信产业中经营的在位企业相对于潜在的新进入者拥有巨大优势……不仅在位企业的2G业务与3G互补，而且由于可以利用现有的2G基础设施，其铺设运营3G产业所需基础设施（天线杆等）的成本也远低于新进入者（2002，第C80页）。因此，既存在利用现有品牌向现有客户群提供新产品方面的协同效应，也存在2G与3G业务之间的范围经济。
地理区域之间的协同效应可以从FCC的2G拍卖中明显地看出来。Moreton和Spiller（1998）研究了美国1995—96年的两次移动电话拍卖。他们以每张牌照的中标价为因变量做了简化式回归，解释变量用以刻画牌照地区的人口特征、竞争与管制环境以及各种协同效应。这些拍卖是增价拍卖，成交价格大致等于倒数第二位竞标者对牌照的估值。因此，相关的协同效应是针对倒数第二位竞标者的网络而言的，以捕捉该网络对其估值的影响。为了估计地理协同效应，Moreton和Spiller假设任何竞标者的预期网络与拍卖后的实际网络相同。
他们把地理协同效应分为“本地”和“全国”两类。本地协同效应考察的是某地区牌照的价值与相邻地区2G牌照所有权之间的关系；全国协同效应考察的是倒数第二位竞标者全国网络的总体规模。Moreton和Spiller发现了本地协同效应存在的有力证据：“在本地层面，我们的结果表明，两张或多张相邻牌照对单一竞标者的价值高于对分散竞标者的价值。”（第711页）随着相邻牌照覆盖的地理区域扩大，这种本地协同效应似乎迅速减弱，而全国协同效应的证据则很微弱。在Moreton和Spiller的研究中，现有蜂窝服务的本地覆盖往往会降低2G牌照的成交价。这似乎与Binmore和Klemperer关于不同移动电话服务之间存在范围经济的论点相矛盾。Moreton和Spiller认为，这种负相关可能反映了竞争的减少：各公司不愿与现有的模拟移动电话在位者激烈竞价，宁愿把有限的资源用在别处。可是，这一论证有些牵强。
在增价拍卖中，竞标者会按自己的估值出价，如果现有模拟移动服务与2G服务之间存在正的协同效应，那么无论其他竞标者出价如何，这都会提高牌照对倒数第二位竞标者的价值。正如所预期的，Moreton和Spiller发现2G牌照的价值与市场人口数量和人口增长率成正比，与所服务地区的面积成反比。这些结果与Ausubel等人（1997）的结论大体一致，也符合直觉。人口和需求可能正相关，所以在竞争程度给定时，人口增加往往会提高预期利润；而在人口规模给定时，地理区域越大，2G蜂窝网络的铺设成本越高，预期利润越低。Moreton和Spiller的研究还发现一些证据表明，监管者要求现有模拟移动电话网络进行资费备案的辖区，其2G牌照的价值往往更高。
这表明对现有服务的资费备案可能具有反竞争效应，导致整体价格更高。与监管通告和价格信息公布相关的潜在反竞争效应在其他产业中也已得到证明。
第二篇:土木工程毕业论文中英文翻译
外文翻译
班级:xxx 学号:xxx 姓名:xxx
一、外文原文:
Structural Systems to Resist Lateral Loads

Commonly Used Structural Systems

With loads measured in tens of thousands of kips, there is little room in the design of high-rise buildings for excessively complex thoughts. Indeed, the better high-rise buildings carry the universal traits of simplicity of thought and clarity of expression. It does not follow that there is no room for grand thoughts. Indeed, it is with such grand thoughts that the new family of high-rise buildings has evolved. Perhaps more important, the new concepts of but a few years ago have become commonplace in today's technology.

Omitting some concepts that are related strictly to the materials of construction, the most commonly used structural systems in high-rise buildings can be categorized as follows:

1. Moment-resisting frames.
2. Braced frames, including eccentrically braced frames.
3. Shear walls, including steel plate shear walls.
4. Framed tubes.
5. Tube-in-tube structures.
6. Core-interactive structures.
7. Cellular or bundled-tube systems.

Particularly with the recent trend toward more complex forms, but in response also to the need for increased stiffness to resist the forces from wind and earthquake, most high-rise buildings have structural systems built up of combinations of frames, braced bents, shear walls, and related systems. Further, for the taller buildings, the majority are composed of interactive elements in three-dimensional arrays.

The method of combining these elements is the very essence of the design process for high-rise buildings. These combinations need to evolve in response to environmental, functional, and cost considerations so as to provide efficient structures that provoke the architectural development to new heights. This is not to say that imaginative structural design can create great architecture. To the contrary, many examples of fine architecture have been created with only moderate support from the structural engineer, while only fine structure, not great architecture, can be developed without the genius and the leadership of a talented architect.
In any event, the best of both is needed to formulate a truly extraordinary design of a high-rise building.

While comprehensive discussions of these seven systems are generally available in the literature, further discussion is warranted here. The essence of the design process is distributed throughout the discussion.

Moment-Resisting Frames

Perhaps the most commonly used system in low- to medium-rise buildings, the moment-resisting frame is characterized by linear horizontal and vertical members connected essentially rigidly at their joints. Such frames are used as a stand-alone system or in combination with other systems so as to provide the needed resistance to horizontal loads. In the taller of high-rise buildings, the system is likely to be found inappropriate as a stand-alone system, because of the difficulty in mobilizing sufficient stiffness under lateral forces. Analysis can be accomplished by STRESS, STRUDL, or a host of other appropriate computer programs; analysis by the so-called portal method or the cantilever method has no place in today's technology. Because of the intrinsic flexibility of the column/girder intersection, and because preliminary designs should aim to highlight weaknesses of systems, it is not unusual to use center-to-center dimensions for the frame in the preliminary analysis. Of course, in the latter phases of design, a realistic appraisal of in-joint deformation is essential.

Braced Frames

The braced frame, intrinsically stiffer than the moment-resisting frame, finds also greater application to higher-rise buildings. The system is characterized by linear horizontal, vertical, and diagonal members, connected simply or rigidly at their joints. It is used commonly in conjunction with other systems for taller buildings and as a stand-alone system in low- to medium-rise buildings. While the use of structural steel in braced frames is common, concrete frames are more likely to be of the larger-scale variety. Of special interest in areas of high seismicity is the use of the eccentrically braced frame. Again, analysis can be by STRESS, STRUDL, or any one of a series of two- or three-dimensional analysis computer programs. And again, center-to-center dimensions are used commonly in the preliminary analysis.

Shear Walls

The shear wall is yet another step forward along a progression of ever-stiffer structural systems. The system is characterized by relatively thin, generally (but not always) concrete elements that provide both structural strength and separation between building functions. In high-rise buildings, shear wall systems tend to have a relatively high aspect ratio; that is, their height tends to be large compared to their width. Lacking tension in the foundation system, any structural element is limited in its ability to resist overturning moment by the width of the system and by the gravity load supported by the element. Limited to a narrow width, the system must somehow be widened if it is to provide the needed resistance to overturning. One obvious use of the system, which does have the needed width, is in the exterior walls of a building, where the requirement for windows is kept small. Structural steel shear walls, generally stiffened against buckling by a concrete overlay, have found application where shear loads are high. The system, intrinsically more economical than steel bracing, is particularly effective in carrying shear loads down through the taller floors in the areas immediately above grade. The system has the further advantage of high ductility, a feature of particular importance in areas of high seismicity.
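The width limit described above lends itself to a quick back-of-the-envelope check. The sketch below is hypothetical (a rigid-block idealization with invented numbers, not from the paper): the restoring moment from the gravity load N acting at half the wall width B must at least match the overturning moment from the lateral resultant F acting at height h.

```python
# Hypothetical rigid-block check of the overturning limit described above:
# restoring moment N*B/2 (gravity load times half the wall width) versus
# overturning moment F*h (resultant lateral force times its height).

def overturning_ok(N_kN, B_m, F_kN, h_m):
    """True if the gravity restoring moment exceeds the overturning moment."""
    return N_kN * B_m / 2.0 >= F_kN * h_m

# A narrow wall fails the check; widening it (larger B) restores the margin,
# which is the motivation the text gives for widening narrow walls.
print(overturning_ok(N_kN=5000, B_m=4.0, F_kN=400, h_m=30))   # False
print(overturning_ok(N_kN=5000, B_m=12.0, F_kN=400, h_m=30))  # True
```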
The analysis of shear wall systems is made complex because of the inevitable presence of large openings through these walls. Preliminary analysis can be by truss analogy, by the finite element method, or by making use of a proprietary computer program designed to consider the interaction, or coupling, of shear walls.

Framed or Braced Tubes

The concept of the framed or braced tube erupted into the technology with the IBM Building in Pittsburgh, but was followed immediately by the twin 110-story towers of the World Trade Center, New York, and a number of other buildings. The system is characterized by three-dimensional frames, braced frames, or shear walls, forming a closed surface more or less cylindrical in nature, but of nearly any plan configuration. Because those columns that resist lateral forces are placed as far as possible from the centroid of the system, the overall moment of inertia is increased and stiffness is very high.
The analysis of tubular structures is done using three-dimensional concepts or by two-dimensional analogy, where possible. Whichever method is used, it must be capable of accounting for the effects of shear lag. The presence of shear lag, detected first in aircraft structures, is a serious limitation on the stiffness of framed tubes, and has limited recent applications of framed tubes to buildings of about 60 stories. Designers have developed various techniques for reducing the effects of shear lag, most noticeably the use of belt trusses. This system finds application in buildings perhaps 40 stories and higher. However, except for possible aesthetic considerations, belt trusses interfere with nearly every building function associated with the outside wall; the trusses are placed often at mechanical floors, much to the disapproval of the designers of the mechanical systems. Nevertheless, as a cost-effective structural system, the belt truss works well and will likely find continued approval from designers. Numerous studies have sought to optimize the location of these trusses, with the optimum location very dependent on the number of trusses provided. Experience would indicate, however, that the location of these trusses is governed by the optimization of mechanical systems and by aesthetic considerations, as the economics of the structural system is not highly sensitive to belt truss location.

Tube-in-Tube Structures

The tubular framing system mobilizes every column in the exterior wall in resisting overturning and shearing forces. The term 'tube-in-tube' is largely self-explanatory in that a second ring of columns, the ring surrounding the central service core of the building, is used as an inner framed or braced tube. The purpose of the second tube is to increase resistance to overturning and to increase lateral stiffness. The tubes need not be of the same character; that is, one tube could be framed, while the other could be braced.

In considering this system, it is important to understand clearly the difference between the shear and the flexural components of deflection, the terms being taken from beam analogy. In a framed tube, the shear component of deflection is associated with the bending deformation of columns and girders (i.e., the webs of the framed tube) while the flexural component is associated with the axial shortening and lengthening of columns (i.e., the flanges of the framed tube).
In a braced tube, the shear component of deflection is associated with the axial deformation of diagonals while the flexural component of deflection is associated with the axial shortening and lengthening of columns.

Following beam analogy, if plane surfaces remain plane (i.e., the floor slabs), then axial stresses in the columns of the outer tube, being farther from the neutral axis, will be substantially larger than the axial stresses in the inner tube. However, in the tube-in-tube design, when optimized, the axial stresses in the inner ring of columns may be as high, or even higher, than the axial stresses in the outer ring. This seeming anomaly is associated with differences in the shearing component of stiffness between the two systems. This is easiest to understand where the inner tube is conceived as a braced (i.e., shear-stiff) tube while the outer tube is conceived as a framed (i.e., shear-flexible) tube.

Core Interactive Structures

Core interactive structures are a special case of a tube-in-tube wherein the two tubes are coupled together with some form of three-dimensional space frame. Indeed, the system is used often wherein the shear stiffness of the outer tube is zero. The United States Steel Building, Pittsburgh, illustrates the system very well. Here, the inner tube is a braced frame, the outer tube has no shear stiffness, and the two systems are coupled by a space frame, or 'hat' structure. Note that the exterior columns would be improperly modeled if they were considered as systems passing in a straight line from the 'hat' to the foundations; these columns are perhaps 15% stiffer as they follow the elastic curve of the braced core. Note also that the axial forces associated with the lateral forces in the inner columns change from tension to compression over the height of the tube, with the inflection point at about 5/8 of the height of the tube.
The outer columns, of course, carry the same axial force under lateral load for the full height of the columns, because the shear stiffness of the system is close to zero. The space structures of outrigger girders or trusses, which connect the inner tube to the outer tube, are located often at several levels in the building. The AT&T headquarters is an example of an astonishing array of interactive elements:

1. The structural system is 94 ft (28.6 m) wide, 196 ft (59.7 m) long, and 601 ft (183.3 m) high.
2. Two inner tubes are provided, each 31 ft (9.4 m) by 40 ft (12.2 m), centered 90 ft (27.4 m) apart in the long direction of the building.
3. The inner tubes are braced in the short direction, but with zero shear stiffness in the long direction.
4. A single outer tube is supplied, which encircles the building perimeter.
5. The outer tube is a moment-resisting frame, but with zero shear stiffness for the center 50 ft (15.2 m) of each of the long sides.
6. A space-truss hat structure is provided at the top of the building.
7. A similar space truss is located near the bottom of the building.
8. The entire assembly is laterally supported at the base on twin steel-plate tubes, because the shear stiffness of the outer tube goes to zero at the base of the building.

Cellular Structures

A classic example of a cellular structure is the Sears Tower, Chicago, a bundled tube structure of nine separate tubes. While the Sears Tower contains nine nearly identical tubes, the basic structural system has special application for buildings of irregular shape, as the several tubes need not be similar in plan shape. It is not uncommon for some of the individual tubes to rise to different heights; indeed, this flexibility is at once one of the strengths and one of the weaknesses of the system.

The particular weakness of this system, especially in framed tubes, has to do with the concept of differential column shortening. The shortening of a column under load is given by the expression
Δ = ΣfL/E

For buildings of 12 ft (3.66 m) floor-to-floor distances and an average compressive stress of 15 ksi (103 MPa), the shortening of a column under load is 15(12)(12)/29,000, or 0.074 in (1.9 mm), per story. At 50 stories, the column will have shortened to 3.7 in. (94 mm) less than its unstressed length. Where one cell of a bundled tube system is, say, 50 stories high and an adjacent cell is, say, 100 stories high, those columns near the boundary between the two systems need to have this differential deflection reconciled. Major structural work has been found to be needed at such locations. In at least one building, the Rialto Project, Melbourne, the structural engineer found it necessary to vertically pre-stress the lower-height columns so as to reconcile the differential deflections of columns in close proximity, with the post-tensioning of the shorter column simulating the weight to be added onto the adjacent, higher columns.
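The arithmetic can be checked directly. This minimal sketch restates the numbers above (the 29,000 divisor is the steel modulus E in ksi; the story height of 12 ft is converted to inches):

```python
# Axial shortening per story: delta = f * L / E (from the expression above).
f_ksi = 15.0            # average compressive stress, ksi
L_in = 12.0 * 12.0      # story height: 12 ft expressed in inches
E_ksi = 29_000.0        # Young's modulus of steel, ksi

delta_in = f_ksi * L_in / E_ksi
print(f"per story: {delta_in:.3f} in ({delta_in * 25.4:.1f} mm)")
# per story: 0.074 in (1.9 mm)
print(f"after 50 stories: {50 * delta_in:.1f} in ({50 * delta_in * 25.4:.0f} mm)")
# after 50 stories: 3.7 in (95 mm)
```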
二、原文翻译：
抗侧向荷载的结构体系
常用的结构体系
当荷载达到数万千磅（kip）量级时，高层建筑的设计中就没有多少余地进行极其复杂的构思了。确实，较好的高层建筑普遍具有构思简单、表现明晰的特点。
这并不是说没有进行宏观构思的余地。实际上,正是因为有了这种宏观的构思,新奇的高层建筑体系才得以发展,可能更重要的是:几年以前才出现的一些新概念在今天的技术中已经变得平常了。
如果忽略一些与建筑材料密切相关的概念不谈,高层建筑里最为常用的结构体系便可分为如下几类:
1. 抗弯矩框架。
2. 支撑框架，包括偏心支撑框架。3. 剪力墙，包括钢板剪力墙。4. 框架筒体。5. 筒中筒结构。6. 核心交互结构。
7. 框格体系或束筒体系。
特别是由于最近趋向于更复杂的建筑形式，同时也需要增加刚度以抵抗风力和地震力，大多数高层建筑都具有由框架、支撑构架、剪力墙和相关体系相结合而构成的体系。而且，就较高的建筑物而言，大多数都是由交互式构件组成三维阵列。
将这些构件结合起来的方法正是高层建筑设计方法的本质。其结合方式需要在考虑环境、功能和费用后再发展,以便提供促使建筑发展达到新高度的有效结构。这并不是说富于想象力的结构设计就能够创造出伟大建筑。正相反,有许多例优美的建筑仅得到结构工程师适当的支持就被创造出来了,然而,如果没有天赋甚厚的建筑师的创造力的指导,那么,得以发展的就只能是好的结构,并非是伟大的建筑。无论如何,要想创造出高层建筑真正非凡的设计,两者都需要最好的。
虽然在文献中通常可以见到有关这七种体系的全面性讨论，但在这里还值得进一步讨论。设计方法的本质贯穿于整个讨论之中。
抗弯矩框架
抗弯矩框架也许是低,中高度的建筑中常用的体系,它具有线性水平构件和垂直构件在接头处基本刚接之特点。这种框架用作独立的体系,或者和其他体系结合起来使用,以便提供所需要水平荷载抵抗力。对于较高的高层建筑,可能会发现该本系不宜作为独立体系,这是因为在侧向力的作用下难以调动足够的刚度。
我们可以利用STRESS、STRUDL或其他大量合适的计算机程序进行结构分析。所谓的门架法或悬臂法分析在当今的技术中已无一席之地。由于柱梁节点固有的柔性，并且由于初步设计应力求突出体系的弱点，所以在初步分析中使用框架的中心距尺寸是司空见惯的。当然，在设计的后期阶段，对节点变形进行实际的评价是很有必要的。
支撑框架
支撑框架实际上比抗弯矩框架刚度更大，在较高的建筑中也得到更广泛的应用。这种体系以其节点处铰接或刚接的线性水平构件、垂直构件和斜撑构件为特色，它通常与其他体系共同用于较高的建筑，并且作为一种独立的体系用在低、中高度的建筑中。虽然支撑框架常用结构钢，但混凝土框架更可能用于规模更大的建筑。
尤其引人关注的是,在强震区使用偏心支撑框架。
此外,可以利用STRESS,STRUDL,或一系列二维或三维计算机分析程序中的任何一种进行结构分析。另外,初步分析中常用中心距尺寸。
剪力墙
剪力墙在加强结构体系刚性的发展过程中又前进了一步。该体系的特点是具有相当薄的,通常是(而不总是)混凝土的构件,这种构件既可提供结构强度,又可提供建筑物功能上的分隔。
在高层建筑中，剪力墙体系趋向于具有相对大的高宽比，即与宽度相比，其高度偏大。由于基础体系不能承受拉力，任何结构构件抵抗倾覆弯矩的能力都受到体系宽度和构件所承受重力荷载的限制。由于剪力墙宽度狭窄受限，所以需要以某种方式加以扩大，以便提供所需的抗倾覆能力。在窗户需求量小的建筑物外墙中，这种确有所需宽度的体系得到了明显的应用。
钢结构剪力墙通常由混凝土覆盖层来加强以抵抗失稳,这在剪切荷载大的地方已得到应用。这种体系实际上比钢支撑经济,对于使剪切荷载由位于地面正上方区域内比较高的楼层向下移特别有效。这种体系还具有高延性之优点,这种特性在强震区特别重要。
由于这些墙内必然存在一些大洞口，使得剪力墙体系的分析变得错综复杂。可以通过桁架模拟法、有限元法，或者利用专为考虑剪力墙交互（耦合）作用而设计的专用计算机程序进行初步分析。
框架或支撑式筒体结构:
框架或支撑式筒体的概念最先应用于匹兹堡的IBM大楼，随后立即被应用于纽约的110层世界贸易中心双塔和其他建筑中。这种体系的特征是：由三维框架、支撑框架或剪力墙形成一个性质上近似圆柱体的闭合曲面，但平面形状几乎可以任意。由于抵抗侧向荷载的柱子被设置得尽可能远离体系的形心，整体惯性矩得以增大，刚度也很大。
筒体结构的分析可以采用三维概念，或者在可能的情况下采用二维类比。不管采用哪种方法，都必须能够考虑剪力滞后的影响。
剪力滞后最早是在飞机结构中发现的，它的存在严重限制了框架筒体的刚度，并使框架筒体近来的应用限于60层左右的建筑。设计者已经开发出很多技术来减小剪力滞后的影响，其中最有名的是环带桁架的应用。这种体系在大约40层及更高的建筑中得到了应用。然而，除了可能的美观考虑之外，环带桁架几乎会干扰与外墙相关的每一项建筑功能；桁架往往设置在设备层，这令设备系统的设计师们很不赞成。但是，作为一种性价比较好的结构体系，环带桁架效果很好，因此会得到设计师们持续的认可。由于其最佳位置很大程度上取决于所设置桁架的数量，很多研究都试图优化这些桁架的位置。不过经验表明，这种结构体系的经济性对桁架位置并不十分敏感，所以这些桁架的位置主要取决于设备系统的优化和美观的要求。

筒中筒结构：
筒体框架体系能调动外墙中的每一根柱子来抵抗倾覆力和剪力。“筒中筒”这个名字顾名思义，就是用围绕建筑物中央服务核心区的第二圈柱子作为内部的框架筒或支撑筒。设置第二个筒的目的是增强抗倾覆能力和增大侧向刚度。两个筒不必具有相同的性质，也就是说，一个筒可以是框架筒，而另一个可以是支撑筒。
在考虑这种体系时，清楚地认识和区别变形的剪切分量和弯曲分量是很重要的，这些术语来源于梁的类比。在框架筒中，变形的剪切分量与柱和梁的弯曲变形有关（即框架筒的腹板），而弯曲分量与柱的轴向压缩和伸长有关（即框架筒的翼缘）。在支撑筒中，变形的剪切分量与斜撑的轴向变形有关，而弯曲分量则与柱的轴向压缩和伸长有关。
根据梁的类比，如果平截面保持平面（即楼板），那么外筒柱由于离中和轴更远，其轴向应力将大大高于内筒柱。但是在筒中筒结构的设计中，经过优化后，内圈柱的轴向应力可能与外圈柱相当，甚至更高。这种看似反常的现象源于两种体系剪切刚度分量的差别。把内筒看成支撑筒（即剪切刚性的），而把外筒看成框架筒（即剪切柔性的），就很容易理解这一点。
核心交互式结构:
核心交互式结构是筒中筒的一种特殊情况，其中两个筒通过某种形式的三维空间框架耦合在一起。事实上，这种体系常用于外筒剪切刚度为零的情况。位于匹兹堡的美国钢铁大楼很好地展示了这种体系。这里，内筒是支撑框架，外筒没有剪切刚度，两个体系通过空间结构（“帽”式结构）耦合起来。需要指出的是，如果把外柱看成从“帽”到基础直线传力的体系，那是不恰当的建模；由于这些柱顺应支撑核心筒的弹性曲线，其刚度大约要高15%。同样需要指出的是，内柱中与侧向力相关的轴力沿筒的高度由拉力变为压力，反弯点位于筒高约5/8处。当然，外柱
在侧向荷载作用下沿柱的全高承受相同的轴力，因为该体系的剪切刚度接近于零。
连接内筒与外筒的伸臂梁或桁架等空间结构往往布置在建筑物的若干楼层处。美国电话电报公司（AT&T）总部大楼就是大量交互式构件协同工作的一个惊人实例：
1、结构体系长59.7米,宽28.6米,高183.3米。
2、布置了两个内筒，每个筒的尺寸为9.4米×12.2米，两筒中心沿建筑长向相距27.4米。
3、在短方向上内筒被支撑起来,但是在长方向上没有剪切刚度。
4、环绕着建筑物布置了一个外筒。
5、外筒是一个抗弯矩框架，但在每个长边中央15.2米范围内剪切刚度为零。
6、在建筑的顶部布置了一个空间桁架构成的“帽式”结构。
7、在建筑的底部布置了一个相似的空间桁架结构。
8、由于外筒的剪切刚度在建筑底部趋于零，整个组合体在底部由双钢板筒提供侧向支承。
框格体系或束筒体系结构:
位于美国芝加哥的西尔斯大厦是束筒结构的经典之作，它是由九个独立的筒组成的束筒结构。虽然西尔斯大厦包含九个几乎相同的筒，但由于各个筒在平面形状上无须相似，这种基本结构体系在不规则形状的建筑中有着特殊的用途。一些单个的筒升至不同的高度是很常见的；事实上，这种灵活性既是该体系的优点之一，也是其弱点之一。
这种体系的特殊弱点，尤其是在框架筒中，与柱子压缩变形差的概念有关。柱子在荷载作用下的压缩变形由下式计算：
△=ΣfL/E 对于层高12英尺（3.66米）、平均压应力15 ksi（约103 MPa）的建筑，柱子在荷载作用下每层的压缩变形为15(12)(12)/29,000，即0.074英寸（1.9毫米）。到第50层，柱子将比其未受压长度缩短3.7英寸（94毫米）。当束筒体系中的一个筒高（比如说）50层，而相邻的筒高100层时，位于两个体系交界处附近的柱就需要对这种变形差加以协调。
人们发现在这些部位需要进行大量的结构处理。至少在一幢建筑（墨尔本的Rialto项目）中，结构工程师发现有必要对较矮的柱子施加竖向预应力，以协调相邻柱子的变形差，对较矮柱子的后张拉模拟了将要施加到相邻较高柱子上的重量。
第三篇:行政管理专业毕业论文中英文翻译
新公共管理的现状
欧文·E·休斯
(澳大利亚莫纳什大学管理系)
毫无疑问,世界上许多国家,无论是发达国家还是发展中国家,在20世纪80年代后期和90年代初期都开始了一场持续的公共部门管理变革运动。这场改革运动至今仍在很多方面继续对政府的组织和管理产生着影响。人们对于这些改革的看法众说纷纭,莫衷一是。批评家尤其是英国和美国的批评家们认为,新模式存在着各种各样的问题,而且也不具有国际普遍性的改革意义,公共管理不可能被称为范式。批评几乎涵盖了变化的各个方面。大多数批评都属于学术上的吹毛求疵。不同的思想流派讨论着细枝末节;学术期刊上的文章也越来越抽象,远离现实。同时,公共管理者在实践中不断推动和实施着这项变化和改革。正如我在其他文章中所认为的那样,在大多数国家,传统的公共行政模式已经为公共管理模式所取代。公共部门的变革回应了几个相互联系的重大现实问题,包括:职能公共部门提供公共服务的低效率;经济理论的变化;私营部门相关变化产生的影响,尤其是全球化作为一种经济力量的兴起;技术变化使得分权同时又能更好地控制全局成为可能。行政管理可以分为三个鲜明的发展阶段:前传统阶段、公共行政传统模式阶段和公共管理改革阶段。每个阶段都有自己的管理模式。从上一个阶段过渡到下一个阶段并非轻而易举,从传统的公共行政到公共管理的过渡至今尚未完成。但这只是时间的问题。因为新模式背后的理论基础非常强大。这场变革运动以“新公共管理”著称,尽管这个名称引起了争论,然而它不但在蓬勃发展着,而且是对大多数发达国家已经采取的管理模式的最佳表述。传统的行政模式相对于它所处的时代是一项伟大的改革,但是,那个时代已经过去了。
一、前传统模式
很显然，在19世纪末官僚体制理论尚未健全之前，已经存在着某种形式的行政管理。公共行政已经有很长的历史了，它与政府这一概念以及文明的兴起一样历史悠久。正如格拉登（Gladden）指出的那样，行政的某种模式自从政府出现之后就一直存在着。首先是创始者或领导者赋予社会以可能，然后是组织者或行政者使之永恒。行政或事务管理是所有社会活动中的中间因素，虽然不是光彩夺目，但对社会的持续发展却是至关重要的。公认的行政体制在古埃及就已经存在了，其管辖范围从每年尼罗河泛滥引起的灌溉事务到金字塔的建造。中国在汉朝就采用了儒家规范，认为政府官员的任用不应根据出身，而应根据品德和能力，政府的主要目标是谋取人民的福利。在欧洲，各种帝国——希腊、罗马、神圣罗马、西班牙等——首先是行政帝国，它们由中央通过各种规则和程序进行管理。韦伯认为，中世纪“现代”国家的发展同时伴随着“官僚治理结构的发展”。尽管这些国家以不同的方式进行管理，但它们具有共同的特点，这可以称为前现代。也就是说，早期的行政体制本质上是人格化的，或者说是建立在韦伯所说的“裙带关系”的基础上，即以效忠国王或大臣等某个特定的人为基础，而不是非人格化的、以效忠组织或国家为基础。尽管存在着这么一种观点，即认为行政管理本身不为人赞许的特点仅仅来自于传统模式，但早期的做法常常导致谋求个人利益的贪污行为或滥用职权。在早期行政体制下，我们现在看来觉得很奇怪的做法曾是当时政府执行职能的普遍行为。那些一心走仕途的人往往依靠朋友或亲戚获取工作，或者买官，即先用钱收买海关官员或税收官员的职位，然后再向客户伸手要钱，从而既收回最初的买官投资，又可以大赚一笔。美国19世纪的“政党分肥制度”意味着执政党更替的同时，政府中的所有行政职位也随之更换。前现代官僚体制是“个人的、传统的、扩散的、同类的和特殊的”，而按照韦伯的论证，现代官僚体制应当是“非人格化的、理性的、具体的、成就取向的和普遍的”。个人化政府往往是低效率的：裙带关系意味着无能的而不是能干的人被安排到领导岗位上；政党分肥制常常导致腐败，此外还存在着严重的低效率。传统行政模式的巨大成功使得早期做法看起来很奇怪。专业化、非政治化的行政在我们看来是如此顺理成章，以至难以想象会有别的制度存在。西方的行政制度，即使简单到通过考试选拔官员的想法，也是直到1854年英国的诺思科特—屈维廉报告出台后才开始建立，尽管这种制度在中国早已通行很久了。
二、传统的公共行政模式
在19世纪末期,另外一种模式开始在全世界流行,这就是所谓的传统行政模式。它的主要理论基础来源于几个国家的学者,即,美国的伍德罗·威尔逊和德国的马克斯·韦伯,人们把他们和官僚制模式相联系;弗雷德里克·泰勒系统地阐述了科学管理理论,该理论也来源于对美国私营部门的运用,为公共行政提供了方 法。与其他理论家不同,泰勒没有着力关注公共部门,可是他的理论却在该领域具有广泛影响。这三位理论家是传统公共行政模式的主要影响者。对于其他国家来说,还要加上诺思科特和屈维廉,他们对美国之外的国家的行政尤其是威尔逊的行政体制产生了重要影响。在19世纪中期,诺思科特和屈维廉最先提出了通过考试和品德来任命官员的主张,并提出了无偏见和行政中立的观点。传统的行政模式有以下几个主要特点: 1.官僚制。政府应当根据等级、官僚原则进行组织。德国社会学家马克斯·韦伯对官僚制度有一个经典的、清晰的分析。虽然这种官僚制思想在商业组织和其他组织中采用过,但它在公共部门得到了更好和更长久的执行。
2.最好的工作方式和程序都在详尽全面的手册中加以规定,以供行政人员遵循。严格地遵守这些原则将会为组织运行提供最好的方式。
3.官僚服务。一旦政府涉足政策领域,它将成为通过官僚体制提供公共产品和服务的提供者。
4.在政治、行政二者的关系中,行政管理者一般认为政治与行政事务是可以分开的。行政就是贯彻执行指令,而任何事关政策或战略事务的决定都应当由政治领导者做出,这可以确保民主责任制。
5.公共利益被假定为公务员个人的惟一动机,为公众服务是无私的付出。6.职业化官僚制。公共行政被看作是一种特殊活动,因而要求公务员保持中立、默默无闻、终身雇用以及平等地服务于任何一个政治领导人。
7. The administrative task is, in the literal sense, to carry out the instructions of others without assuming personal responsibility for the results. By comparison with the earlier administrative models, we can better understand the main strengths of the Weberian system and what set it apart. The most important difference between the Weberian system and the models that preceded it is this: a rule-based, impersonal system replaced personal administration. An organization and its rules matter more than any individual within it. The bureaucracy must be impersonal in its operation and in how it responds to clients. As Weber argued: "The reduction of modern office management to rules is deeply embedded in its very nature. The theory of modern public administration assumes that the authority to order certain matters by decree has been legitimately conferred upon public authorities. This does not entitle an agency to regulate a matter by commands issued for each specific case; it may regulate the matter only abstractly. This stands in extreme contrast to the regulation of all relationships through individual privileges and the bestowal of favors, which is completely dominated by patrimonialism, at least insofar as such relationships are not fixed by inviolable tradition."
This point is extremely important. Early administration rested on personal relationships: loyalty to kinsmen, patrons, leaders or parties rather than accountability to the system. At times early administration was politically sensitive, since appointees to administrative office were the arms of politicians or of the ruling class. But it was also frequently arbitrary, and arbitrary administration can be unjust, especially toward those unable or unwilling to play the personal political game. An impersonal system built on Weberian principles can eliminate arbitrariness altogether, at least in the ideal case. The existence of files, the reliance on precedent and the legal basis of action mean that the same decisions will always be made in the same circumstances. Not only is this more efficient; citizens and those within the bureaucratic hierarchy alike know where they stand.
The other differences all follow from this one. On a foundation of rules and impersonality, a strict hierarchy arises naturally, and the hierarchy and its rules persist after individuals leave the organization. Although Weber's emphasis was on the system as a whole, he also attended to the tenure and conditions of individuals within bureaucratic organizations.
The traditional administrative model was a great success and was widely adopted by governments around the world. It showed its advantages both in theory and in practice. It was more efficient than the corrupt systems that preceded it, and the idea of a professional civil service was an enormous advance over personal and amateur service. Yet the model has now revealed problems which suggest that, if it is not already obsolete, it soon will be.
The theoretical pillars of public administration no longer describe the reality of government. The theory of political control has become deeply problematic. Administration means following the instructions of others, and it therefore requires an orderly method of issuing and receiving them, with a clear division between those who give instructions and those who carry them out. But this is unrealistic, and it becomes ever less plausible as the scale and scope of public services grow. The other theoretical pillar of the traditional model, the theory of bureaucracy, is likewise no longer regarded as an especially effective form of organization. Formal bureaucracy may have its strengths, but it is also thought to breed conformists rather than innovators, to encourage administrators to avoid risk rather than take it, and to lead them to waste scarce resources rather than use them efficiently. Weber saw bureaucracy as an "ideal type", but the ideal type is now seen to foster inertia, sap initiative, and produce the mediocrity and inefficiency believed to be diseases peculiar to the public sector. For this it has been criticized. Indeed, the word "bureaucratic" is today more often treated as a synonym for inefficiency.
3. The New Public Management Model
In the 1980s and 1990s a new approach to management appeared in the public sector, aimed at the defects of the traditional administrative model. It could relieve some of the problems of the traditional model, and it also meant a striking change in how the public sector operates. The new approach has gone by many names: "managerialism", "new public management", "market-based public administration", the "post-bureaucratic paradigm" or "entrepreneurial government". By the late 1990s the term "new public management" had come to predominate.
Despite the many names, there is broad agreement about the actual changes that have taken place in public sector management. First, whatever the model is called, it represents a major departure from traditional public administration, with far more attention to the achievement of results and to the personal responsibility of managers. Second, it is an explicit move away from classical bureaucracy, making organizations, personnel, and terms and conditions of employment more flexible. Third, it sets clear objectives for organizations and staff, so that the accomplishment of tasks can be measured against performance indicators. Programs can likewise be evaluated more systematically, and it can be determined more rigorously than before whether government programs achieve their stated goals. Fourth, senior administrators are more likely to be politically committed to the government of the day rather than non-partisan or neutral. Fifth, government activities are more likely to be subjected to market testing, separating the purchaser of public services from the provider, that is, separating "steering from rowing". Government involvement need not always mean government acting through bureaucratic means. Sixth, there has been a trend toward reducing government functions through privatization, market testing and contracting out. In some cases this has been fundamental: once the key shift from process to results has occurred, all the consecutive steps connected with it become necessary.
Holmes and Shand offer a particularly useful summary of the character of the reforms. Treating the new public management as a paradigm, they describe this good managerial approach as having the following features: (1) a more strategic or structured approach to decision-making (emphasizing efficiency, results and service quality); (2) a devolved management environment replacing highly centralized hierarchies, which brings resource allocation and service delivery closer to the point of delivery itself and thereby yields more relevant information and feedback from clients and other interest groups; (3) greater flexibility to explore alternatives to direct public provision, so as to deliver policy outcomes at lower cost; (4) a focus on matching authority with responsibility as a key to improved performance, including mechanisms of explicit performance contracting; (5) the creation of a competitive environment within and between public sector organizations; (6) the strengthening of strategic decision-making capacity at the center, so that government can steer its responses to external change and diverse interests quickly, flexibly and at low cost; (7) greater accountability and transparency through requirements to report on results and their full costs; and (8) broad service-wide budgeting and management systems that support and encourage these changes.
The new public management does not hold that there is one best way to achieve a given result. Managers are given responsibility without being told how to obtain results. Deciding how to work is part of the manager's job, and if the intended goals are not achieved, the manager is answerable for it.
4. Conclusion
Government administration has passed through three models over the past one hundred and fifty years. The first was the personal, or pre-modern, administrative model; as its defects became increasingly apparent, and in the pursuit of efficiency, it was replaced by the second model, traditional bureaucratic administration. In the same way, when the traditional administrative model ran into serious trouble, it was replaced by the third model, the new public management, with a shift from government toward market alternatives. Since the 1980s the market has been dominant, just as bureaucracy was dominant from the 1920s to the 1960s. Bureaucracy and markets coexist in any system of government; one form merely predominates at one stage and the other at another. The era of the new public management is a period in which bureaucracy is steadily weakening and the market dominates the field of public administration.
In reality, markets and bureaucracies need and complement each other. The new public management cannot wholly replace bureaucracy, just as bureaucracy could not replace the market in Eastern Europe before 1989. What the new public management movement shows is that many functions once performed by the traditional bureaucracy can be, and now often are, performed by markets. In an environment in which bureaucracy is weakening as an organizing principle, market solutions will be put forward. Not every market prescription will succeed, of course, but that is not the point. Governments will look for solutions in the toolbox of the new public management; if those prove ineffective, they will look for other solutions from the same source. The theoretical foundations of government management have changed, and we are fully justified in describing the change with the term "paradigm". Within the academic public administration community there are many critics who reject the new public management, but their criticism has had little effect on the rapid course of government reform. After the new public management another new model will emerge, but there will certainly be no return to the traditional model of administration.
Part Four: Interior Design, Chinese-English Translation (suitable for the foreign-literature translation in a graduation thesis)
Translation of the English Documents for Graduation Design
Interior Design
Susan Yelavich
Interior design embraces not only the decoration and furnishing of space, but also considerations of space planning, lighting, and programmatic issues pertaining to user behaviors, ranging from specific issues of accessibility to the nature of the activities to be conducted in the space. The hallmark of interior design today is a new elasticity in typologies, seen most dramatically in the domestication of commercial and public spaces. Interior design encompasses both the programmatic planning and physical treatment of interior space: the projection of its use and the nature of its furnishings and surfaces, that is, walls, floors, and ceilings. Interior design is distinguished from interior decoration in the scope of its purview. Decorators are primarily concerned with the selection of furnishings, while designers integrate the discrete elements of décor into programmatic concerns of space and use. Interior designers generally practice collaboratively with architects on the interiors of spaces built from the ground up, but they also work independently, particularly in the case of renovations. There is also a strong history of architect-designed interiors, rooted in the concept of Gesamtkunstwerk, the total work of art, that came out of the Arts & Crafts movement of the late nineteenth and early twentieth century. It is no accident that its strongest proponents (from Frank Lloyd Wright to Mies van der Rohe) extended their practices to include the realm of interiors during the nascency of the interior-design profession. Indeed, it was a defensive measure taken by architects who viewed formal intervention by an interior decorator or designer as a threat to the integrity of their aesthetic. Today, apart from strict modernists like Richard Meier who place a premium on homogeneity, architects who take on the role of interior designer (and their numbers are growing) are more likely to be eclectic in philosophy and practice, paralleling the twenty-first century's valorization of plurality. Nonetheless, the bias against interior designers and the realm of the interior itself continues to persist. Critical discussions of the interior have been hampered by its popular perception as a container of ephemera. Furthermore, conventional views of the interior have been fraught with biases: class biases related to centuries-old associations with tradesmen and gender biases related to the depiction of the decorating profession as primarily the domain of women and gay men. As a result, the credibility of the interior as an expression of cultural values has been seriously impaired. However, the conditions and the light in which culture-at-large is understood are changing under the impact of globalization. The distinctions between "high" culture and "low" culture are dissipating in a more tolerant climate that encourages the cross-fertilization between the two poles. Likewise, there are more frequent instances of productive borrowings among architecture, design, and decoration, once considered exclusive domains. And while the fields of architecture, interior design, and interior decoration still have different educational protocols and different concentrations of emphasis, they are showing a greater mutuality of interest.
Another way to think of this emergent synthesis is to substitute the triad of "architecture, interior design, and decoration" with "modernity, technology, and history." One of the hallmarks of the postmodern era is a heightened awareness of the role of the past in shaping the present. In the interior, this manifests itself in a renewed interest in ornament, in evidence of craft and materiality, and in spatial complexities, all running parallel to the ongoing project of modernity. Even more significantly, there is a new elasticity in typologies. Today, the traditional typologies of the interior—house, loft, office, restaurant, and so on—strain to control their borders. Evidence of programmatic convergences can clearly be seen in public and commercial spaces that aspire to be both more user-friendly and consumer-conscious. Growing numbers of private hospitals (in competition for patients) employ amenities and form languages inspired by luxury spas; at the same time, many gyms and health clubs are adopting the clinical mien of medical facilities to convince their clients of the value of their services. The same relaxation of interior protocols can be seen in offices that co-opt the informal, live-work ethic of the artist's loft, and in hotels that use the language (and contents) of galleries. Similarly, increasing numbers of grocery stores and bookstores include spaces and furniture for eating and socializing. Likewise, there is a new comfort with stylistic convergences in interiors that appropriate and recombine disparate quotations from design history. These are exemplified in spaces such as Rem Koolhaas' Casa da Musica (2005) in Porto, Portugal (with its inventive use of traditional Portuguese tiles), and Herzog & de Meuron's Walker Art Center (2005) in Minneapolis, Minnesota (where stylized acanthus-leaf patterns are used to mark gallery entrances). These interiors make an art out of hybridism. They do not simply mix and match period furnishings and styles, but refilter them through a contemporary lens. Another hallmark of the contemporary interior is the overt incorporation of narrative. Tightly themed environments persist in retail spaces such as Ralph Lauren's clothing stores and in entertainment spaces like Las Vegas casinos. However, a more playful and less linear approach to narrative is increasingly common.
Of all the typologies of the interior, the residence has been least affected by change, apart from ephemeral trends such as outdoor kitchens and palatial bathrooms. However, the narrative of the residence dominates interior design at large. It has become the catalyst for rethinking a host of spaces once firmly isolated from it, ranging from the secretary's cubicle, to the nurse's station, to the librarian's reading room. Considerations such as the accommodation of personal accessories in the work space, the use of color in hospitals, and the provision of couches in libraries are increasingly common, to cite just three examples. The domestication of such environments (with curtains and wallpaper, among other residential elements) provides more comfort, more reassurance, and more pleasure to domains formerly defined by institutional prohibitions and social exclusions. Unquestionably, these changes in public and commercial spaces are indebted to the liberation movements of the late 1960s. The battles fought against barriers of race, class, gender, and physical ability laid the groundwork for a larger climate of hospitality and accommodation. It is also possible to detect a wholly other agenda in the popularity of the residential model. The introduction of domestic amenities into commercial spaces, such as recreation spaces in office interiors, can also be construed as part of a wider attempt to put a more acceptable face on the workings of free-market capitalism. In this view, interior design dons the mask of entertainment. There is nothing new about the charade. Every interior is fundamentally a stage set. Nor is it particularly insidious—as long as the conceit is transparent. Danger surfaces, however, when illusion becomes delusion—when design overcompensates for the realities of illness with patronizing sentiment, or when offices become surrogate apartments because of the relentless demands of a round-the-clock economy. In these instances, design relinquishes its potential to transform daily life in favor of what amounts to little more than a facile re-branding of space.
Another force is driving the domestication of the interior and that is the enlarged public awareness of design and designers. There is a growing popular demand for design as amenity and status symbol, stimulated by the proliferation of shelter magazines, television shows devoted to home decorating, and the advertising campaigns of commercial entities such as Target and Ikea. In the Western world, prosperity, combined with the appetite of the media, has all but fetishized the interior, yielding yet another reflection of the narcissism of a consumer-driven society. On the one hand, there are positive, democratic outcomes of the growing public profile of design that can be seen in the rise of do-it-yourself web sites and enterprises like Home Depot that emphasize self-reliance. It can also be argued, more generally, that the reconsideration of beauty implicit in the valorization of design is an ameliorating social phenomenon by virtue of its propensity to inspire improvement. On the other hand, the popularization of interior design through personas such as Philippe Starck, Martha Stewart, and Barbara Barry has encouraged a superficial understanding of the interior that is more focused on objects than it is on behaviors and interactions among objects. For all the recent explosion of interest in interior design, it remains, however, a fundamentally conservative arena of design, rooted as it is in notions of enclosure, security, and comfort. This perception has been exacerbated by the growth of specialized practices focused, for example, on healthcare and hospitality. While such firms offer deep knowledge of the psychology, mechanics, and economies of particular environments, they also perpetuate distinctions that hinder a more integral approach to the interior as an extension of architecture and even the landscape outside. One notable exception is the growth of design and architecture firms accruing expertise in sustainable materials and their applications to the interior. At the same time that design firms are identifying themselves with sustainability and promoting themselves as environmentalists, a movement is building to incorporate environmental responsibility within normative practice. Over the past four decades, efforts have intensified to professionalize the field of interior design and to accord it a status equal to that of architecture. In the US and Canada the Council for Interior Design Accreditation, formerly known as FIDER, reviews interior design education programs at colleges and universities to regulate standards of practice. Furthermore, the International Council of Societies of Industrial Design (ICSID) embraces interior design within its purview, defining it as part of "intellectual profession, and not simply a trade or a service for enterprises."
Yet, the education of interior designers remains tremendously variable, with no uniformity of pedagogy. Hence, interior design continues to be perceived as an arena open to the specialist and the amateur. This perception is indicative of both the relatively short history of the profession itself and the broader cultural forces of inclusion and interactivity that mark a global society.
Source of the original text:
Board of International Research in Design, Design Dictionary: Perspectives on Design Terminology, Birkhäuser Verlag AG, 2008.
Part Five: Chinese-English Translation
Nontraditional Machining Processes
Introduction
Traditional or conventional machining, such as turning, milling and grinding, uses mechanical energy to shear metal against another substance to create holes or remove material. Nontraditional machining processes are defined as a group of processes that remove excess material by various techniques involving mechanical, thermal, electrical or chemical energy, or combinations of these energies, but do not use a sharp cutting tool as it is used in traditional manufacturing processes.
Extremely hard and brittle materials are difficult to machine by traditional machining processes. Using traditional methods to machine such materials means increased demand for time and energy, and therefore increased costs; in some cases traditional machining may not be feasible at all. Traditional machining also results in tool wear and in loss of product quality, owing to the residual stresses induced during machining. Nontraditional machining processes, also called unconventional machining processes or advanced manufacturing processes, are employed where traditional machining processes are not feasible, satisfactory or economical, for special reasons such as the following:
1. Very hard or fragile materials that are difficult to clamp for traditional machining;
2. When the workpiece is too flexible or slender;
3. When the shape of the part is too complex;
4. When parts must be produced without burrs or induced residual stresses.
Traditional machining can be defined as a process that uses mechanical (motion) energy. Non-traditional machining utilizes other forms of energy; the three main forms of energy used in non-traditional machining processes are as follows:
1. Thermal energy;
2. Chemical energy;
3. Electrical energy.
Several types of nontraditional machining processes have been developed to meet extra required machining conditions. When these processes are employed properly, they offer many advantages over traditional machining processes. The common nontraditional machining processes are described in the following sections.
Electrical Discharge Machining (EDM)
Electrical discharge machining (EDM) is sometimes colloquially referred to as spark machining, spark eroding, burning, die sinking or wire erosion. It is one of the most widely used non-traditional machining processes. The main attraction of EDM over traditional machining processes, such as metal cutting with tools and grinding, is that this technique uses a thermoelectric process to erode undesired material from the workpiece by a series of rapidly recurring discrete electrical sparks between the workpiece and an electrode. Traditional machining processes rely on a harder tool or abrasive material to remove softer material, whereas nontraditional machining processes such as EDM use electrical sparks or thermal energy to erode unwanted material and create the desired shape. The hardness of the material is therefore no longer a dominating factor in the EDM process. EDM removes material by discharging an electrical current, normally stored in a capacitor bank, across a small gap between the tool (cathode) and the workpiece (anode), typically on the order of 50 volts and 10 amps. As shown in Fig. 6.1, at the beginning of an EDM operation a high voltage is applied across the narrow gap between the electrode and the workpiece. This high voltage induces an electric field in the insulating dielectric present in the narrow gap, which causes conducting particles suspended in the dielectric to concentrate at the points of strongest electric field. When the potential difference between the electrode and the workpiece is sufficiently high, the dielectric breaks down and a transient spark discharges through the dielectric fluid, removing a small amount of material from the workpiece surface. The volume of material removed per spark discharge is typically in the range of 10^-5 to 10^-6 mm^3. The gap is only a few thousandths of an inch, and is maintained at a constant value by the servomechanism that actuates and controls the tool feed.
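As a rough sense of scale, the per-spark removal figure quoted above can be turned into a removal-rate estimate. The spark repetition rate below is an assumed, typical-order value for illustration only; the text does not give one:

```python
# Order-of-magnitude EDM material removal rate (MRR) estimate.
VOL_PER_SPARK_MM3 = 1e-5   # volume removed per discharge (upper figure from the text)
SPARK_RATE_HZ = 10_000     # assumed pulse repetition rate; not specified in the text

mrr_mm3_per_s = VOL_PER_SPARK_MM3 * SPARK_RATE_HZ
print(f"MRR ~ {mrr_mm3_per_s:.2f} mm^3/s "
      f"({mrr_mm3_per_s * 60:.0f} mm^3/min)")  # ~0.10 mm^3/s, i.e. ~6 mm^3/min
```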
Chemical Machining (CM)
Chemical machining (CM) is a well-known non-traditional machining process in which metal is removed from a workpiece by immersing it in a chemical solution. The process is the oldest of the nontraditional processes and has been used to produce pockets and contours and to remove material from parts having a high strength-to-weight ratio. Moreover, chemical machining is widely used to produce micro-components for various industrial applications such as microelectromechanical systems (MEMS) and the semiconductor industry. In CM, material is removed from selected areas of the workpiece by immersing it in chemical reagents or etchants, such as acids and alkaline solutions. Material is removed by the microscopic electrochemical cell action that occurs during corrosion or chemical dissolution of a metal. Special coatings called maskants protect the areas from which the metal is not to be removed. This controlled chemical dissolution simultaneously etches all exposed surfaces, even though the penetration rate of material removal may be only 0.0025-0.1 mm/min. The basic process takes many forms: chemical milling of pockets, contours and overall metal removal; chemical blanking for etching through thin sheets; photochemical machining (PCM) for etching with photosensitive resists in microelectronics; chemical or electrochemical polishing, where weak chemical reagents are used (sometimes with remote electric assist) for polishing or deburring; and chemical jet machining, where a single chemically active jet is used. A schematic of the chemical machining process is shown in Fig. 6.2a. Because the etchant attacks the material in both the vertical and horizontal directions, undercuts may develop (as shown by the areas under the edges of the maskant in Fig. 6.2b). Typically, tolerances of ±10% of the material thickness can be maintained in chemical blanking. In order to improve the production rate, the bulk of the workpiece should be shaped by other processes (such as machining) prior to chemical machining. Dimensional variations can occur because of size changes in the workpiece due to humidity and temperature. These variations can be minimized by properly selecting etchants and by controlling the environment in the part-generation and production areas of the plant.
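A small sketch of the process arithmetic implied above: etch time follows from depth and penetration rate, and the undercut can be related to depth through an etch factor. Only the rate range (0.0025-0.1 mm/min) comes from the text; the depth and the etch-factor value are assumed for illustration:

```python
# Chemical machining: etch time and undercut estimate.
depth_mm = 0.5          # assumed required etch depth
rate_mm_per_min = 0.05  # penetration rate, within the 0.0025-0.1 mm/min range quoted
ETCH_FACTOR = 1.0       # assumed undercut-to-depth ratio, for illustration only

time_min = depth_mm / rate_mm_per_min
undercut_mm = depth_mm * ETCH_FACTOR  # lateral attack under the maskant edge
print(f"etch time ~ {time_min:.0f} min, undercut ~ {undercut_mm:.2f} mm")
```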
Electrochemical Machining (ECM)
Electrochemical metal removal is one of the more useful nontraditional machining processes. Although the application of electrolytic machining as a metal-working tool is relatively new, the basic principles are based on Faraday's laws. Thus, electrochemical machining can be used to remove electrically conductive workpiece material through anodic dissolution; no mechanical or thermal energy is involved. This process is generally used to machine complex cavities and shapes in high-strength materials, particularly in the aerospace industry for the mass production of turbine blades, jet-engine parts and nozzles, as well as in the automotive (engine castings and gears) and medical industries. More recent applications of ECM include micromachining for the electronics industry. Electrochemical machining (ECM), shown in Fig. 6.3, is a metal-removal process based on the principle of reverse electroplating. In this process, particles travel from the anodic material (the workpiece) toward the cathodic material (the machining tool). Metal removal is effected by a suitably shaped tool electrode, and the parts thus produced have the specified shape, dimensions and surface finish. ECM forming is carried out so that the shape of the tool electrode is transferred onto, or duplicated in, the workpiece; the cavity produced is the female mating image of the tool shape. For high accuracy in shape duplication and high rates of metal removal, the process is operated at very high current densities, of the order of 10-100 A/cm^2, and at relatively low voltages, usually from 8 to 30 V, while a very narrow machining gap (of the order of 0.1 mm) is maintained by feeding the tool electrode toward the workpiece at a rate of 0.1 to 20 mm/min. Dissolved material, gas and heat are removed from the narrow machining gap by electrolyte pumped through the gap at high velocity (5-50 m/s), so the stream of electrolyte carries away the deplated material before it can reach the machining tool. Being a non-mechanical metal removal process, ECM is capable of machining any electrically conductive material at high stock removal rates regardless of its mechanical properties. In particular, the removal rate in ECM is independent of the hardness, toughness and other properties of the material being machined. The use of ECM is most warranted in the manufacture of complex-shaped parts from materials that lend themselves poorly to machining by other, above all mechanical, methods. There is no need for a tool made of a material harder than the workpiece, and there is practically no tool wear. Since there is no contact between the tool and the work, ECM is the machining method of choice for thin-walled, easily deformable components and for brittle materials likely to develop cracks in the surface layer.
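Since the passage grounds ECM in Faraday's laws, a worked example helps: the mass removal rate follows from the current via m = ItM/(zF), and dividing by density gives the volumetric rate. The workpiece material (iron), its dissolution valence, the frontal area and 100% current efficiency are assumptions for illustration; the 100 A/cm^2 current density is the upper figure from the text:

```python
# ECM removal rate from Faraday's law (assumed: iron workpiece, 100% efficiency).
FARADAY = 96485.0   # Faraday constant, C/mol
M_FE = 55.85        # molar mass of iron, g/mol
Z_FE = 2            # assumed dissolution valence (Fe -> Fe2+)
RHO_FE = 7.87       # density of iron, g/cm^3

current_A = 100.0 * 10.0  # 100 A/cm^2 (text's upper figure) over an assumed 10 cm^2 area

mass_rate_g_s = current_A * M_FE / (Z_FE * FARADAY)  # Faraday's law
vol_rate_cm3_min = mass_rate_g_s / RHO_FE * 60.0
print(f"~{mass_rate_g_s:.2f} g/s, ~{vol_rate_cm3_min:.1f} cm^3/min")  # ~0.29 g/s, ~2.2 cm^3/min
```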
Laser Beam Machining (LBM)
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. Although the laser is used as a light amplifier in some applications, its principal use is as an optical oscillator, or transducer, for converting electrical energy into a highly collimated beam of optical radiation. The light energy emitted by a laser has several characteristics that distinguish it from other light sources: spectral purity, directionality and high focused power density. Laser machining is the material-removal process accomplished through interaction between the laser and the target material. Generally speaking, these processes include laser drilling, laser cutting, laser welding, and laser grooving, marking or scribing. Laser machining (Fig. 6.4) is localized, non-contact machining, and is almost free of reaction force. The process can remove material in very small amounts and is said to remove material "atom by atom". For this reason, the kerf in laser cutting is usually very narrow, the depth of laser drilling can be controlled to less than one micron per laser pulse, and shallow permanent marks can be made with great flexibility. In this way material can be saved, which may be important for precious materials or for delicate structures in micro-fabrication. The ability to control material removal accurately makes laser machining an important process in micro-fabrication and micro-electronics. Laser cutting of sheet material with a thickness of less than 20 mm can be fast, flexible and of high quality, and large holes or complex contours can be produced efficiently by trepanning. The heat-affected zone (HAZ) in laser machining is relatively narrow, and the re-solidified layer is of micron dimensions; for this reason, the distortion in laser machining is negligible. LBM can be applied to any material that can properly absorb the laser irradiation. Hard or brittle materials such as ceramics are difficult to machine by traditional methods, and the laser is a good choice for solving such difficulties. Laser-cut edges can be made smooth and clean, and no further treatment is necessary. High-aspect-ratio holes, with diameters impossible for other methods, can be drilled with lasers, and small blind holes, grooves, surface texturing and marking can be achieved with high quality. Laser technology is progressing rapidly, and so are laser machining processes: dross adhesion and edge burrs can be avoided, geometric precision can be accurately controlled, and machining quality is improving steadily with the rapid progress of laser technology.
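The quoted bound of at most one micron of drilling depth per pulse implies simple arithmetic for hole depth versus time. The target depth and the pulse repetition rate below are assumed illustrative values, not figures from the text:

```python
# Laser drilling: pulses and time needed for a given hole depth.
DEPTH_PER_PULSE_UM = 1.0   # upper bound quoted in the text, micrometers per pulse
hole_depth_um = 500.0      # assumed target depth (0.5 mm)
REP_RATE_HZ = 1_000        # assumed pulse repetition rate

n_pulses = hole_depth_um / DEPTH_PER_PULSE_UM
drill_time_s = n_pulses / REP_RATE_HZ
print(f"{n_pulses:.0f} pulses, ~{drill_time_s:.1f} s of drilling")  # 500 pulses, ~0.5 s
```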
Ultrasonic Machining (USM)
Ultrasonic machining offers a solution to the expanding need for machining brittle materials such as single crystals, glasses and polycrystalline ceramics, and for increasingly complex operations that require intricate shapes and workpiece profiles. The process is non-thermal and non-chemical; it creates no change in the microstructure or in the chemical or physical properties of the workpiece, and it offers virtually stress-free machined surfaces. It is therefore used extensively in machining hard and brittle materials that are difficult to cut by other traditional methods. The actual cutting is performed either by abrasive particles suspended in a fluid or by a rotating diamond-plated tool. These variants are known respectively as stationary (conventional) ultrasonic machining and rotary ultrasonic machining (RUM). Conventional ultrasonic machining (USM) accomplishes the removal of material by the abrading action of a grit-loaded slurry circulating between the workpiece and a tool that is vibrated with small amplitude. The form tool itself does not abrade the workpiece; the vibrating tool excites the abrasive grains in the flushing fluid, causing them to wear away the material gently and uniformly, leaving a precise reverse form of the tool shape. The uniformity of the sonotrode-tool vibration limits the process to forming small shapes, typically under 100 mm in diameter. The USM system includes the sonotrode-tool assembly, the generator, the grit system and the operator controls. The sonotrode is a piece of metal or tool that is exposed to ultrasonic vibration and transmits this vibratory energy to an element that excites the abrasive grains in the slurry. A schematic representation of the USM set-up is shown in Fig. 6.5. The sonotrode-tool assembly consists of a transducer, a booster and a sonotrode. The transducer converts electrical pulses into a vertical stroke, which is transferred to the booster, where it may be amplified or suppressed. The modified stroke is then relayed to the sonotrode-tool assembly, and the amplitude along the face of the tool typically falls in the 20 to 50 μm range. The vibration amplitude is usually about equal to the diameter of the abrasive grit used. The grit system supplies a slurry of water and abrasive grit, usually silicon carbide or boron carbide, to the cutting area. In addition to providing abrasive particles to the cut, the slurry also cools the sonotrode and removes particles and debris from the cutting area.
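To give a feel for the tool kinematics described above: with the 20-50 μm amplitude range from the text and an assumed ultrasonic frequency (20 kHz is a common choice, though the text gives no figure), the peak tool velocity and acceleration follow from simple harmonic motion:

```python
import math

# USM sonotrode kinematics, assuming sinusoidal motion x(t) = A*sin(2*pi*f*t).
FREQ_HZ = 20_000   # assumed ultrasonic frequency; not given in the text
amp_m = 25e-6      # 25 um amplitude, within the 20-50 um range quoted

omega = 2 * math.pi * FREQ_HZ
v_peak = omega * amp_m       # peak velocity = A * omega
a_peak = omega ** 2 * amp_m  # peak acceleration = A * omega^2
print(f"v_peak ~ {v_peak:.1f} m/s, a_peak ~ {a_peak / 9.81:.0f} g")  # ~3.1 m/s, ~40000 g
```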