Thursday, October 31, 2019

Strategic Marketing - Coca-Cola Essay Example

Coca-Cola is one of the leading food and beverage companies, with a geographical reach that extends to over 200 countries around the world. Coca-Cola manufactures, distributes and sells over 3,500 non-alcoholic beverages that range from drinking water to sports drinks. Coca-Cola is known worldwide for its soft drinks, most popularly its namesake Coca-Cola. The main products that Coca-Cola sells are its carbonated drinks such as Coca-Cola and its different variations, which include Diet Coke, Coke with Lime, Coca-Cola Blak and Coca-Cola Orange. The Coca-Cola Company began as the J.S. Pemberton Medicine Company, which sold medicinal products such as cough syrup and hair dye. Later the co-founder of Coca-Cola, Dr. John Pemberton, a pharmacist, discovered the formula for Coke quite by accident. Soon the J.S. Pemberton Medicine Company became Coca-Cola and began to operate as a beverage company. The revolutionary taste of Coke soon became a preferred taste for consumers, and Coke became a symbol of 'Open Happiness'. The beverage industry is one of the fastest-growing industries, as consumers' preference has gradually shifted from drinking water to soft drinks and even to energy drinks. Thus Coca-Cola faces immense competition from other beverage companies, with the top competitors being Dr Pepper Snapple Group, Inc., Nestlé and PepsiCo, Inc. (Yahoo Finance, 2011). In this report, we will develop a marketing plan for 2011-2012 for Coca-Cola to be presented to the board of directors at Coca-Cola. The marketing plan will discuss the current position of Coca-Cola in the market, using a marketing audit that analyses both the micro and macro environment for the company.

MARKETING AUDIT

MACRO ENVIRONMENT - PESTLE ANALYSIS

Political Factors

With growing consumer awareness of the food and beverage industry, many have become concerned over the power and impact of junk food on children and teenagers. Coke has easily been termed a junk food that contains empty calories and contributes towards the number of obese people in the world. With consumer concern growing, governments may be forced to take action against Coke and other junk foods. Since Coke is an international brand, there is always concern over the growing instability in certain countries, which has been on the rise in the last few years. Since Coke is originally an American brand, it is affected by the growing anti-American sentiment in the Gulf and in certain Asian countries. However, with globalization on the increase, Coke can benefit from emerging and developing markets where demand for Coke will increase even further.

Economic Factors

The recession that began after the US war against terrorism has not just impacted America but also the rest of the world. As the recession continues to take hold, the buying power of consumers is greatly reduced. Consumers are moving from luxury items to items of necessity, and even then they are looking for discounts and bargains. The instability and near-war conditions prevalent in many places, including London, are also affecting the buying power and preferences of consumers. Also, with escalating oil prices, production and transportation costs have risen considerably, which has resulted in increased prices for the product. The same product is now available at higher prices, and at a time of recession.
However, the advantage for Coca-Cola is that it has a manufacturing plant in every city where it markets its product, which considerably decreases transportation costs.

Socio-Cultural Factors

The recent focus on health and nutrition has led consumers to reconsider buying carbonated and other drinks that negatively impact their health. Thus there has been a decrease in demand for traditional Coca-Cola products, which are carbonated drinks, among consumers, especially baby boomers.

Tuesday, October 29, 2019

Jury Nullification Case Study Example

... v. Morgentaler's case, whereby the cited law did not adequately apply (R. v. Morgentaler, 1988). However, this has always been the norm because of the de facto power granted to juries: although the judge's role is to instruct and advise them to decide according to the law in question, the judge cannot interfere with their final verdict. For instance, in R. v. Morgentaler the accused were acquitted after the jury found the argument that s. 251 violated women's rights to be true, and did not in any way hold them accountable for their actions. In most cases, jury nullification, as evident in the case cited above, prompts some individuals, especially those who have committed grievous crimes, to prefer jury arbitration because they expect to receive judgments not commensurate with the magnitude of their charges, or to be acquitted altogether. This is evident in R. v. Morgentaler: although the involved parties did not seek nullification as such, the jury, owing to the unfolding circumstances surrounding the law's interpretation, disregarded the charges and acquitted the accused (R. v. Morgentaler, 1988). Acquittal occurs when the jury finds the stated law inapplicable, oppressive or unpopular based on its own interpretation, or on other considerations, such as morality, that may influence its irrevocable verdict. For instance, in R. v. ... What do you think of jury nullification? Despite the numerous negative responses against jury nullification, I think its role is more one of upholding the execution of justice with consideration of morality. However, this may in many instances differ from the expectations of both the judge and the claimants concerning the laws they cite the accused as having infringed. In all their deliberations and verdicts, juries make certain fundamental considerations whose core purpose is to ensure a fair trial for all parties involved in the case. However, because their verdicts may run contrary to those sought by the involved parties, they may seem either unwise or to favour a particular side. This is especially evident when the jury nullifies the law an accused is said to have violated, pronouncing it conflicting on the strength of its own interpretation. Hence the accused may be acquitted despite having done wrong, as in the R. v. Morgentaler case, where the claimant was sure the specialists were guilty. However, the case was overturned when the specialists cited that s. 251 violated women's rights by compelling them to carry a fetus to term, which could subject them to both emotional and psychological distress (R. v. Morgentaler, 1988). This upholds morality, justice and the rights of vulnerable people, as well as protecting those who may not have adequate knowledge of the interpretation of certain laws. With the intervention of the jury, the accused end up receiving a fair judgment, or acquittal, if the law is oppressive or unpopular, as in R. v. Morgentaler, where the prosecution ended up using another law to defend the cited infringed law (R. v. Morgentaler, 1988). In my opinion, this does not imply that judges, compared to juries, are

Sunday, October 27, 2019

Differences in Grid and Air Gap Techniques

Introduction

In this chapter, a literature review was carried out so that adequate information about the differences between the grid and air-gap techniques could be gathered, with emphasis on why these techniques are important in plain radiography of the lateral hip. These two techniques will be analysed to better explain the acquisition of the image. Both techniques will be compared and their advantages and disadvantages discussed. This was done by means of radiography books and journals. Where possible, primary sources of information were chosen. However, original studies could not always be obtained and secondary sources had to be considered. The use of the internet was also important as it served as a source for and access to relevant articles. Related literature was mainly sourced using the online databases of EBSCO®, CINAHL® and PubMed®, as well as the Institute of Health Care's library and facilities.

Image quality

Image quality refers to the ability to view the anatomical structure under study with precision, which makes it possible to identify and spot any abnormalities (Bushong, 2008). The quality of the image depends on several physical and physiological factors, and this makes it hard to measure. Image quality "is defined in radiological terminology as the relationship between the structures of a test sample to be irradiated with x-rays and the parameters of its visualisation" (Hertrich, 2005, p. 244). According to Bushong (2008), the most important factors that improve or degrade image quality are contrast resolution, spatial resolution, noise and artefacts. Image quality cannot be measured in a precise way since the quality of radiographs is hard to define (Bushong, 2008). In digital radiography (DR) the image quality depends on a number of characteristics that can change the viewing experience. One of these factors is frequency, which is a measure of the total amount of contrast within the image. This characteristic depends on the raw data (x-ray energy) that the imaging detector absorbs. The frequency of the image is represented by different grey-scale levels that show the density of a particular part of the anatomical structure. This is how contrast is affected. A high-contrast image has high frequency due to the amount of x-ray photons absorbed (Carlton & Adler, 2006). Image quality is also subjective and depends on the viewer (Sherer et al., 2006). Different people may need to alter the quality of the image by increasing or reducing the contrast or by changing the sharpness of the image, depending on their individual visual abilities (Dendy & Heaton, 2003). Dendy and Heaton (2003) argue that image quality also depends on the display system and the way the image is produced. The authors further argue that room lighting affects image quality and might also diminish it.

Image contrast

Image contrast refers to the difference in densities between adjacent anatomical structures. The amount of contrast produced on an image depends on the structural characteristics of the anatomical part of the body as well as the characteristics of the x-ray beam when it penetrates the patient's body. Contrast depends on the attenuations within the patient's body due to the different densities in various parts of the body. The higher the difference in densities, the higher the contrast (Sherer et al., 2006).
However, small changes in the densities of structures would not be detected on a high-contrast image, since high contrast does not have a great enough exposure latitude to give several shades of grey in the image (Bushong, 2008). This means that having high contrast in an image does not necessarily mean that it is optimal for every radiographic examination. Having low contrast means that better contrast resolution is produced, and this gives the viewer the ability to differentiate between anatomical structures that have similar densities. This is why contrast is a very crucial factor in image quality (Oakley, 2003). Scattered radiation affects image contrast, as do the characteristics of the receptor and display system. The anatomical detail and contrast of small anatomical structures may also be reduced due to image blurring (Carlton & Adler, 2006).

Noise

Noise affects the image's contrast resolution and the detail seen in the image. Like audio noise and video noise, radiographic noise is caused by weak signals in areas of the image (Oakley, 2003). The lower the noise, the better the contrast resolution and so the better the image quality. According to Bushong (2008), there are four main components that affect radiographic noise: graininess, structure mottle, quantum mottle and scatter radiation. Graininess and structure mottle cannot be controlled by the radiographer since they are dependent on the image receptor. However, the radiographer can use several techniques and exposure factors to improve image quality and reduce the noise as much as possible, depending on the subject under examination. The penetration energy of the x-ray photons (kV) can be increased for subjects who are obese or who are having thick areas of their bodies irradiated. Quantum mottle is also a very important characteristic in defining noise. Bushong (2008) explains that quantum mottle depends on the amount of x-rays that are exposed and absorbed by the image detector. When few x-rays react with the receptor, the resultant image will appear mottled. However, when more x-rays are absorbed by the detector, the image will appear smooth. Noise can be quantified by measuring the signal-to-noise ratio of the image (Bushong, 2008). If not enough x-ray photons reach the detector, the image is said to be under-exposed, resulting in a low signal-to-noise ratio. However, a high signal-to-noise ratio is achieved if the right radiographic technique is used with the right exposure factors (Bushong, 2008).

Spatial Resolution

Spatial resolution is a term used in imaging that refers to the resolution of a radiograph. Having a high resolution means that more detail can be seen and detected on the image. Spatial resolution is a very important performance indicator in radiography. Quality control phantoms are used to check the spatial resolution and contrast of an imaging system. Spatial resolution relies on spatial frequency, and this quantity can be quantified by counting the number of line pairs per millimetre (lp/mm). These line pairs are dark and white lines that are used to assess the resolution of an image. Detail is very important in radiography since the outlines of tissues, organs and specific pathologies need to be sharp and detailed. High spatial resolution is also important when assessing for subtle fractures, such as scaphoid fractures, which could easily be missed if the radiograph is not sharp enough (Bushong, 2008).
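The noise discussion above notes that noise can be quantified through the image's signal-to-noise ratio and that quantum mottle falls as more photons reach the detector. As a rough illustration only (this is not taken from Bushong or the other cited texts), the sketch below estimates SNR as the mean divided by the standard deviation of a uniformly exposed region of interest, and simulates photon counting with a Poisson model; the function and variable names are invented for the example.

```python
# A minimal sketch (not from the cited texts) of one common way to estimate the
# signal-to-noise ratio (SNR) of a digital radiograph: mean pixel value divided by
# the standard deviation of pixel values inside a uniformly exposed region of interest.
import numpy as np

def estimate_snr(region_of_interest: np.ndarray) -> float:
    """Return mean/std of a uniform ROI; higher values indicate less quantum mottle."""
    signal = region_of_interest.mean()
    noise = region_of_interest.std()
    return float(signal / noise)

# Illustrative only: simulate an under-exposed and a well-exposed detector ROI
# using Poisson counting statistics (quantum mottle grows as photon counts fall).
rng = np.random.default_rng(0)
under_exposed = rng.poisson(lam=50, size=(64, 64))    # few photons per pixel
well_exposed = rng.poisson(lam=5000, size=(64, 64))   # many photons per pixel

print(f"SNR, under-exposed: {estimate_snr(under_exposed):.1f}")  # roughly sqrt(50), about 7
print(f"SNR, well-exposed:  {estimate_snr(well_exposed):.1f}")   # roughly sqrt(5000), about 71
```

For Poisson-limited noise, quadrupling the photon count roughly doubles this SNR, which is why a properly exposed detector produces the smoother image described above.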
Scatter Radiation

When x-ray radiation encounters matter, some photons pass unimpeded, reaching the image receptor, whereas other photons are completely absorbed, since the energy of the primary x-ray beam is deposited within the atoms comprising the tissue. This absorption interaction of x-ray photons with matter is known as the photoelectric effect (Fauber et al., 2009). The photoelectric effect is dependent on the matter, and its effect decreases rapidly with increasing photon energy (Dendy & Heaton, 2003). Scatter radiation is made up of photons that are not absorbed but instead lose their energy during interactions with the atoms making up the tissue (Fauber et al., 2009). This scattering effect is known as the Compton effect (Carlton & Adler, 2006). It happens when an incoming photon interacts with matter and loses energy. This makes the photon change direction, and it may leave the anatomical part and interact with the image receptor (Fauber et al., 2009). Scattered low-energy photons reduce the contrast on the final radiograph and are also hazardous for patients and staff because of their changed direction and their lower energy relative to the primary beam (Dendy & Heaton, 2003).

Scatter Reduction

As explained above, scatter radiation is produced during a Compton interaction, in which a primary photon interacts with an atom of the patient's body, loses energy and changes direction. Scatter is produced mainly in the patient, due to the variable attenuation and densities of the different organs in the body, and it can be controlled by using anti-scatter techniques and the right exposure factors. Consequently, the radiographer should use an adequate technique and exposure factors to reduce the radiation produced within the patient's body. Carlton and Adler (2006) argue that when the energy of the primary beam is increased there is a higher chance for the photons to undergo a Compton interaction. This means that the higher the energy given to the photons (kV), the more likely a Compton interaction with the body's atoms becomes, creating more scatter radiation and a decline in radiographic contrast (Bushong, 2008; Carlton & Adler, 2006). However, Shah, Hassan and Newman (1996) think otherwise. In their study they stressed the effectiveness of anti-scatter techniques on image contrast and concluded that the influence of kV on scatter production is small. The authors further stated that the improvement in contrast that occurs when the kV is lowered is usually due to an increased subject contrast, since less scatter reaches the film. Carlton and Adler (2006) also gave importance to the size of the area of the body being irradiated. They suggested that by decreasing the area of irradiation, as well as applying compression, the scatter radiation reaching the detector could be significantly reduced. Using this technique, Shah, Hassan and Newman (1996) noted a decrease in the dose area product (DAP) when decreasing the area of irradiation, therefore lowering the risk of increasing patient dose.

Anti-scatter techniques

Anti-scatter techniques are radiographic techniques that make use of devices or applications, such as grids and air gaps, so that scatter radiation is absorbed or deviated from reaching the image detector. These anti-scatter techniques help in reducing patient dose as well as improving the quality of the radiographic image. The two main techniques relevant to this study are explained and analysed in the following sub-sections.
Grid Technique

Grids are used in radiography to protect the image detector from scatter radiation. Scatter radiation degrades the quality of the image and may lead to loss of anatomical detail and information (Sherer et al., 2006). Anti-scatter grids are made up of parallel radio-opaque strips with a low-attenuation material interspacing the strips (Sherer et al., 2006). The most commonly used interspacing materials are aluminium and carbon fibre (Court & Yamazaki, 2004). The main function of these anti-scatter grids is contrast improvement. According to Carlton and Adler (2006), the most effective way to see how well a grid is performing is by measuring the contrast improvement factor, which measures the ability of a grid to improve contrast. This factor is affected by the volume of tissue irradiated and by the kV. If the amount of scatter radiation increases, the contrast of the image will be reduced, therefore reducing the contrast improvement factor. It is calculated using the following formula: K = radiographic contrast with the grid / radiographic contrast without the grid (Carlton & Adler, 2006, p. 263). The higher the contrast improvement factor, the greater the contrast improvement. However, Court and Yamazaki (2004) argue that since contrast can be digitally altered in digital radiography, it is best to calculate the signal-to-noise ratio (SNR) of the image. This is especially useful in cases where there is low object contrast. The interspacing material separating the lead grid lines is also very important to the functionality of a grid. In the study performed by Court and Yamazaki (2004) it was concluded that aluminium has a higher atomic number than carbon fibre and so it absorbs more low-energy scatter radiation. However, aluminium also absorbs some of the primary photons, therefore increasing patient dose. Alternatively, carbon fibre absorbs less primary radiation than aluminium (Court & Yamazaki, 2004). Grid ratio is also an important factor to consider in improving image quality, especially image contrast. The grid ratio is obtained by dividing the height of the strips by the strip separation. As the grid ratio affects the proportion of scatter reaching the detector, it is instrumental in improving image contrast (Dendy & Heaton, 2003). There are principally two types of grids: linear (parallel) grids and focused linear grids (Fauber, 2009). Both types have their own advantages and disadvantages. Parallel grids are made up of linear lead strips, parallel to each other, with low-density material interspacing them. This variety of grid produces grid cut-off at the lateral edges, since the strips do not coincide with the oblique divergence of the beam (Dendy & Heaton, 2003; Fauber, 2009). It is also essential that these grids are positioned correctly, perpendicular to the central ray of the primary beam. If this is done incorrectly, there will be grid cut-off and the lead strips will absorb a lot of the primary beam, which will show up on the image (Dendy & Heaton, 2003). This will result in image deterioration and in the patient receiving an extra dose of radiation when the exposure is repeated. Focused grids, however, are designed in such a way that the lead strips are gradually angled moving away from the central axis. Although these grids are designed to eliminate the cut-off at the lateral sides, they still have to be used at a specific focus-to-image distance (FID), depending on the type of grid being used (Dendy & Heaton, 2003).
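The two quantities defined in this sub-section, the contrast improvement factor K and the grid ratio, are simple ratios, and a short numerical sketch may make them concrete. The figures below are invented for illustration only and are not measurements from Carlton and Adler (2006) or the other cited sources.

```python
# Minimal sketch of the two grid figures quoted above; all numbers are invented
# for illustration and are not drawn from the cited studies.

def contrast_improvement_factor(contrast_with_grid: float,
                                contrast_without_grid: float) -> float:
    """K = radiographic contrast with the grid / radiographic contrast without the grid."""
    return contrast_with_grid / contrast_without_grid

def grid_ratio(strip_height_mm: float, strip_separation_mm: float) -> float:
    """Grid ratio = height of the lead strips / distance between them."""
    return strip_height_mm / strip_separation_mm

# Example: a hypothetical grid that raises measured contrast from 0.20 to 0.55
print(contrast_improvement_factor(0.55, 0.20))   # K = 2.75, i.e. contrast improved 2.75x
# Example: 2.5 mm tall strips separated by 0.25 mm give a 10:1 grid
print(grid_ratio(2.5, 0.25))                     # 10.0
```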
Although grids are used to improve image contrast and reduce the scatter reaching the detector, this is at the expense of a higher radiation dose to the patient. This happens because the mAs has to be increased when using the grid, in order to compensate for the primary beam photons absorbed by the grid (Carlton & Adler, 2006).

Air-gap Technique

The air-gap technique is an alternative technique used to reduce the amount of scatter reaching the detector. By employing an air gap between the patient and the image detector, the amount of scattered radiation reaching the detector decreases, especially over the first tens of centimetres of gap, due to the large divergence of the scattered beam (Ball & Price, 1995). The primary radiation is not affected or reduced, since at this stage the rays of the primary beam are almost parallel to one another (Ball & Price, 1995). When the air-gap technique is used, the object-to-image distance (OID) is increased, which may produce some magnification (Sherer et al., 2006). Anti-scatter techniques are important in reducing the low-energy radiation reaching the detectors. However, the primary beam should not be deflected or disrupted, so that image acquisition and image quality are not affected (Fauber, 2009). When the grid technique is employed, the grid is unable to discriminate between primary radiation and scattered radiation, and so this could lead to grid cut-off, and grid lines may appear on the image (Maynard, 1981). Maynard (1981) argues that with the use of an air gap the image quality and diagnostic quality of many projections improve. A study by Karoll et al. (1985) analysed patient dose when the air gap was employed compared to when the grid was used; in this study the air gap was employed in a digital subtraction examination. Karoll et al. (1985) reported that by using the air-gap technique the mA could be lowered without losing spatial resolution. The results of this study were remarkable, as the air-gap technique allowed a 25% to 88% reduction in the mA without increasing the kV or the exposure time (Karoll et al., 1985). This meant that patient dose was reduced, since the mA was lowered and so the patient was irradiated less. Although this study is 25 years old, it is still relevant, since in direct digital radiography windowing has given the radiographer the possibility of reducing the exposure factors to a certain limit while still obtaining a good diagnostic image. This means that patient dose could be lowered. Both the grid and air-gap techniques were studied and compared to assess patient dose by Kottomasu and Kulms (1997). The authors concluded that the air gap improves musculoskeletal digital imaging without an increase in skin entrance dose. According to Kottomasu and Kulms (1997), this happened because the scattered photons had less energy once diverted by the patient; they were deflected and did not have enough energy to reach the image detector (Kottomasu & Kulms, 1997). Barall (2004) also suggested that when employing the air-gap technique the radiographer should apply the inverse square law by increasing the SID and applying tighter collimation. This will ensure the greatest possible decrease in patient dose (Barall, 2004). The increase in SID could enable better use of the air gap while reducing magnification by keeping the object-to-image distance (OID) constant. In relation to the horizontal beam lateral hip projection, there is a reduction in dose and a good diagnostic resultant image when compared with the grid technique (Barall, 2004).
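Barall's (2004) suggestion to increase the SID rests on the inverse square law: beam intensity falls with the square of the distance from the tube, so the exposure (mAs) has to be scaled by the square of the SID ratio to keep the same receptor exposure. The short sketch below works through that adjustment with made-up numbers; it illustrates the inverse square relationship only and is not a protocol from Barall (2004).

```python
# Hedged sketch of the inverse square law adjustment mentioned above.
# All values are invented for illustration; they are not from Barall (2004).

def mas_for_new_sid(mas_old: float, sid_old_cm: float, sid_new_cm: float) -> float:
    """Scale mAs by the square of the SID ratio to keep receptor exposure constant:
    mAs_new = mAs_old * (SID_new / SID_old) ** 2."""
    return mas_old * (sid_new_cm / sid_old_cm) ** 2

# Moving the tube from 100 cm to 140 cm SID (a hypothetical air-gap adjustment)
print(round(mas_for_new_sid(mas_old=20, sid_old_cm=100, sid_new_cm=140), 1))  # 39.2 mAs
```

Magnification is governed by SID / (SID − OID), so lengthening the SID while the OID stays fixed also brings the magnification closer to 1, which is the point made in the paragraph above.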
Trimble (2000) concluded that imaging the thoracic spine without a grid was possible in children and adults of small size. In this study a significant dose reduction was noted, and therefore, on this basis, imaging the hip laterally using a horizontal beam and applying the air-gap technique instead of the grid may also result in a reduction of patient dose compared with the grid technique.

Digital radiography

Radiography has been revolutionised and developed throughout the years: from screen-film (SF) radiography, high-quality digital systems have evolved (Oakley, 2003). With the introduction of digital imaging systems, image quality characteristics have improved. The process of image formation in DR is similar to that in SF: the image is first generated, then processed, archived and presented. Instead of films, DR uses detectors which, when exposed to x-ray radiation, absorb the irradiated energy, which is then transformed into electrical charges, recorded, digitised and configured into different grey scales (Dendy & Heaton, 2003). The grey scales presented on the produced image represent the amount of x-ray photons absorbed by the detector. A big advantage of digital radiography is image manipulation through post-processing. While viewing the image, the radiographer can zoom in or out, change the grey scale and use measuring tools. Another great advantage of DR over SF is that images can be stored safely and archived. This solves the problem of films being lost and enables future reference to the images (Carlton & Adler, 2006). There are two types of digital imaging systems: computed radiography (CR) and direct digital radiography (DR). In computed radiography, imaging plates containing photostimulable crystals are used, which absorb the x-ray energy and store it temporarily (Körner et al., 2007). Processing involves scanning the detective layer pixel by pixel using a high-energy laser beam of a specific wavelength. Since the exposed photon energies are only stored temporarily in the detective layer, the read-out process should start immediately after exposure. This is mainly because the amount of energy stored in these crystals decreases over time. Although this is a big step up from screen-film, spatial resolution in CR may decrease if the viewing monitors are not of the appropriate resolution (Körner et al., 2007). Direct digital radiographic systems use a photoconductor that directly converts x-ray photons to electrical charges once the photons are absorbed. The most common material used as a photoconductor in industry is amorphous selenium. This material has a high intrinsic spatial resolution. However, the material of the detector does not determine the pixel size, matrix and spatial resolution of the system (Dendy & Heaton, 2003); these are determined by the recording and read-out devices used. Therefore image processing in DR is as important as in SF and CR. In DR, image processing is used primarily to improve the image quality by removing technical artefacts, optimising the contrast and reducing the noise (Dendy & Heaton, 2003).

Radiation Dose

The transition from SF to DR has also changed the radiation dose that the patient receives from an x-ray exposure. Radiation dose is the amount of radiation absorbed by the patient due to a radiation exposure (Carlton & Adler, 2006). In SF radiography the dynamic range of the receptor (film) is relatively low, and so it only detects specific exposures that lie within its parameters. However, in DR the digital receptor can detect a wide range of exposures.
This means that a slightly underexposed or overexposed image is acceptable, since image quality can be altered using windowing. Therefore in DR the radiation dose can be kept relatively low compared with SF while still producing a good diagnostic image. This could also work the other way, with patients being overexposed to radiation because of the wide dynamic range of the receptors. The ALARA concept is based on the theory that there is no safe dose of radiation from any kind of irradiation or radioactive material (The Ionising Radiation [Medical Exposure] Regulations 2000/2007; The Medical Exposure Directive 97/43/Euratom). In this way, individuals' internal and external exposure to radiation is kept to a minimum. This principle does not only address radiation used in medicine but also the social, technical and economic considerations of the use of radiation. The principle also takes into consideration the time of exposure to radiation, filtration, and the appropriate materials selected to minimise radioactivity depositing on surfaces. This also ensures the safe disposal of materials containing radioactivity, such as needles used in nuclear medicine (The Ionising Radiation [Medical Exposure] Regulations 2000/2007; The Medical Exposure Directive 97/43/Euratom). The use of ionising radiation should be monitored and managed carefully to ensure as low a dose to the patient as is reasonably achievable while at the same time producing an image of high diagnostic quality.

Related Literature

The most recent literature reviewed in relation to this dissertation was that of Flinthman (2006), who assessed thirty-five horizontal beam lateral hip radiographs for image quality. Nineteen of the cases were performed using the air-gap technique, whereas sixteen were performed using the grid technique. Several radiologists and radiographers were asked to evaluate the images. It was found that the air-gap technique produced higher image quality than the grid technique (Flinthman, 2006). In Flinthman's study, several people were asked to evaluate an uneven number of cases that were meant to be compared with regard to the technique used to obtain the radiographs. According to Flinthman (2006), it is more important to have a small group of people evaluating the radiographs, because the results can then be more specific and more reliable (Flinthman, 2006). A limitation of this study is that Flinthman (2006) did not use the same subjects for both techniques, and so it is harder to attain valid and conclusive results that could be applied in a clinical setting. A similar study comparing the grid and air-gap techniques was conducted by Persliden and Carlsson (1997), who studied scatter reduction using the air-gap and the grid technique. This study investigated the effect of the air-gap technique over the imaging plate and demonstrated the positional variation of scattered radiation (Persliden & Carlsson, 1997). The authors concluded that by using the air-gap technique, patient irradiation was lowered. Persliden and Carlsson (1997) argued that field size and patient thickness also greatly affected the use of the air gap. Like Persliden and Carlsson (1997), Trimble (2000) assessed the image quality of lateral thoracic spine radiographs and chest radiographs. These examinations were done using both the grid technique and the air-gap technique. Trimble (2000) found it important to have a large sample of subjects while keeping the number of specialists evaluating the images small.
Trimble's study found that the air gap produced better image quality than the grid. Similar to this study is Goulding's (2006) study, which looked at image quality in lateral hip radiography when using both the grid and the air-gap technique. The radiographs were obtained from the accident and emergency department in which Goulding (2006) worked, where radiographers performed lateral hip shoot-through examinations using their preferred air-gap or grid technique. Goulding (2006) took a sample from the recorded examinations of both techniques. The researcher excluded examinations with an exposure of 100 mAs or more, any duplicate patient numbers due to re-assessment, and any examinations that used both the air-gap and grid technique in the same examination, as this signified a very large patient. Goulding (2006) compared the sampled grid and air-gap radiographs after reporting radiographers had evaluated five areas on each radiograph, chosen by the researcher. The radiographers had to score each area from one to five, where one is poor and five is optimum. The result was that the air-gap technique produced better image quality than the grid technique. A limitation of this study, however, was that the patients used to test the two techniques were not the same, and so the results may not be totally reliable, since patient size and exposure factors were not constant but varied with each examination.

Conclusion

The literature reviewed in this chapter has further explored the roles of the air-gap and grid techniques in imaging. It has also analysed the effect of scatter radiation and ways to reduce it in order to improve radiographic image quality while limiting the radiation dose to the patient as much as possible. Several studies were reviewed and analysed and will help to improve this experimental research. Some studies that are similar to this study were reviewed and discussed. In the next chapter, a description of the research design used in this study will be presented.

Friday, October 25, 2019

Police Blunders In The Manson Investigation

On August 10, 1969 the headline "Actress Is Among 5 Slain at Home in Beverly Hills" appeared on the front page of the New York Times (Roberts). This was the beginning of an investigation marked by police errors that prolonged the arrest of Charles Manson. Several people claimed they had heard gunshots and screaming in the early morning hours of August 9. Mrs. Kott, who lived at 10070 Cielo Drive, heard three or four gunshots at what she guessed to be about twelve thirty to one o'clock a.m., after which she heard nothing. About three quarters of a mile south of the murder scene, Tim Ireland was supervising an overnight party at the camp where he was a counselor. Everyone had gone to bed when Tim awoke to a man's voice screaming "Oh, God, no, please don't! Oh God, no, don't, don't, don't..." (Bugliosi & Gentry 4). At this time, about twelve forty a.m., he woke his supervisor, told him about the scream, and requested that he go see if anyone needed help. He drove around the area but saw nothing unusual. Robert Bullington of the Bel Air Patrol was in his parked car when he heard three gunshots spaced a few seconds apart. He immediately called in to headquarters (the call was logged at 4:11 a.m.). Headquarters then called the LAPD, but nothing further was done. About four thirty, paperboy Steve Shannon, who hadn't heard anything the previous night, noticed what looked like a telephone wire hanging over the front gate and a bug light on near the house. Mr. Kott also noticed the wire when he went out to get his paper at about seven thirty that morning (Bugliosi & Gentry 4-5). Winifred Chapman, the housekeeper for 10050 Cielo Drive, arrived at the house and also noticed the wire hanging at the gate. She first thought the power was out, but then she pushed the button to open the front gate and it opened. She began to walk up the driveway when she noticed an unfamiliar automobile parked there. She figured, though, that it belonged to a visitor and continued toward the house. When she entered, she picked up the phone and the line was dead. Thinking she should inform someone, she entered the living room, where she noticed two blue trunks which had not been there when she left the previous night. A closer look revealed blood on the trunks. There was blood scattered about the living room.

Thursday, October 24, 2019

The Art of War

Sun-Tzu Wu is the reputed author of the Chinese classic Ping-fa (The Art of War), written approximately 475-221 B.C. Penned at a time when China was divided into six or seven states that often resorted to war with each other in their struggles for supremacy, it is a systematic guide to strategy and tactics for rulers and commanders. In doing business on the Internet during this time of rampant computer viruses and hacker attacks, it may be wise for us to follow some of his tactical principles in order to ensure the safety of ourselves and our future clients.

Know your enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle.

In a chilling article entitled "Big Brother is Watching", Bob Sullivan of MSNBC recounts a tale from a recent visit to London: Only moments after stepping into the Webshack Internet cafe in London's Soho neighborhood, "Mark" asked me what I thought of George W. Bush and Al Gore. "I wouldn't want Bush running things," he said. "Because he can't run his Web site." Then he showed me a variety of ways to hack Bush's Web sites. That was just the beginning of a far-reaching chat during which the group nearly convinced me Big Brother is in fact here in London. "I don't know if he can run the free world," Mark said. "He can't keep the Texas banking system computers secure." So-called "2600" clubs are a kind of hacker "boy scout" organization; there are local 2600 chapters all around the globe. It is in this environment, and this mindset, that London's hackers do their work. They do not analyze computer systems and learn how to break them out of spite, or some childish need to destroy: Mark and friends see themselves as merely accumulating knowledge that could be used in self-defense if necessary. They are the citizen's militia, the Freedom Fighters of the Information Age, trying to stay one step ahead of technology that could one day be turned against them. Jon-K Adams, in his treatise entitled Hacker Ideology (aka Hacking Freedom), states that hackers have been called both techno-revolutionaries and heroes of the computer revolution. Hacking "has become a cultural icon about decentralized power." But for all that, hackers are reluctant rebels. They prefer to fight with code than with words. And they would rather appear on the net than at a news conference. Status in the hacker world cannot be granted by the general public: it takes a hacker to know and appreciate a hacker. That's part of the hacker's revolutionary reluctance; the other part is the news media's slant toward sensationalism, such as, "A cyberspace dragnet snared fugitive hacker." The public tends to think of hacking as synonymous with computer crime, with breaking into computers and stealing and destroying valuable data. As a result of this tabloid mentality, the hacker attempts to fade into the digital world, where he (and it is almost always he) has a place if not a ... In his self-conception, the hacker is not a criminal, but rather a "person who enjoys exploring the details of programmable systems and how to stretch their capabilities." Which means that he is not necessarily a computer geek. The hacker defines himself in terms that extend beyond the computer, as an "expert or enthusiast of any kind. One might be an astronomy hacker" (Jargon File).
So in the broadest sense of his self-conception, the hacker hacks knowledge; he wants to know how things work, and the computer (the prototypical programmable system) simply offers more complexity and possibility, and thus more fascination, than most other things. From this perspective, hacking appears to be a harmless if nerdish enthusiasm. But at the same time, this seemingly innocent enthusiasm is animated by an ideology that leads to a conflict with civil authority. The hacker is motivated by the belief that the search for knowledge is an end in itself and should be unrestricted. But invariably, when a hacker explores programmable systems, he encounters barriers that bureaucracies impose in the name of security. For the hacker, these security measures become arbitrary limits placed on his exploration, or, in cases that often lead to confrontation, they become the focus of further explorations: for the hacker, security measures simply represent a more challenging programmable system. As a result, when a hacker explores such systems, he hacks knowledge, but ideologically he hacks the freedom to access knowledge. Political hackers are another group that consider themselves modern freedom fighters. "Hacktivists" have officially moved from nerdish extremists to become the political protest visionaries of the digital age, a meeting at the Institute of Contemporary Arts in London was told on Thursday. Paul Mobbs, an experienced Internet activist and anti-capitalist protestor, will tell attendees that the techniques used by politically minded computer hackers, from jamming corporate networks and sending email viruses to defacing Web sites, have moved into the realm of political campaigning. Mobbs says that the term "Hacktivism" has been adopted by so many different groups, from peaceful Net campaigners to Internet hate groups, that it is essentially meaningless, but he claims that Internet protest is here to stay. "It has a place, whether people like it or not," says Mobbs. Steve Mizrach, in his 1997 dissertation entitled Is there a Hacker Ethic for 90s Hackers?, delves into this subject in great detail. He describes the divergent groups of hackers and explains their modus operandi: I define the computer underground as members of the following six groups. Sometimes I refer to the CU as "90s hackers" or "new hackers," as opposed to old hackers, who are hackers (old sense of the term) from the 60s who subscribed to the original Hacker Ethic.

- Hackers (Crackers, system intruders): These are people who attempt to penetrate security systems on remote computers. This is the new sense of the term, whereas the old sense simply referred to a person who was capable of creating hacks, or elegant, unusual, and unexpected uses of technology. Typical magazines (both print and online) read by hackers include 2600 and Iron Feather Journal.
- Phreaks (Phone Phreakers, Blue Boxers): These are people who attempt to use technology to explore and/or control the telephone system. Originally, this involved the use of "blue boxes" or tone generators, but as the phone company began using digital instead of electro-mechanical switches, the phreaks became more like hackers. Typical magazines read by Phreaks include Phrack, Line Noize, and New Fone Express.
- Virus writers (also, creators of Trojans, worms, logic bombs): These are people who write code which attempts to a) reproduce itself on other systems without authorization and b) often has a side effect, whether that be to display a message, play a prank, or trash a hard drive. Agents and spiders are essentially 'benevolent' virii, raising the question of how underground this activity really is. Typical magazines read by Virus writers include 40HEX.
- Pirates: Piracy is sort of a non-technical matter. Originally, it involved breaking copy protection on software, and this activity was called "cracking." Nowadays, few software vendors use copy protection, but there are still various minor measures used to prevent the unauthorized duplication of software. Pirates devote themselves to thwarting these things and sharing commercial software freely with their friends. They usually read Pirate Newsletter and Pirate magazine.
- Cypherpunks (cryptoanarchists): Cypherpunks freely distribute the tools and methods for making use of strong encryption, which is basically unbreakable except by massive supercomputers. Because the NSA and FBI cannot break strong encryption (which is the basis of PGP or Pretty Good Privacy), programs that employ it are classified as munitions, and distribution of algorithms that make use of it is a felony. Some cryptoanarchists advocate strong encryption as a tool to completely evade the State, by preventing any access whatsoever to financial or personal information. They typically read the Cypherpunks mailing list.
- Anarchists: are committed to distributing illegal (or at least morally suspect) information, including but not limited to data on bombmaking, lockpicking, pornography, drug manufacturing, pirate radio, and cable and satellite TV piracy. In this parlance of the computer underground, anarchists are less likely to advocate the overthrow of government than the simple refusal to obey restrictions on distributing information. They tend to read Cult of the Dead Cow (CDC) and Activist Times Incorporated (ATI).
- Cyberpunk: usually some combination of the above, plus interest in technological self-modification, science fiction of the Neuromancer genre, and interest in hardware hacking and "street tech." A youth subculture in its own right, with some overlaps with the "modern primitive" and "raver" subcultures.

So should we fear these geeky little mischief-makers? The New York Post revealed recently that a busboy allegedly managed to steal millions of dollars from the world's richest people by stealing their identities and tricking credit agencies and brokerage firms. In his article describing this event Bob Sullivan says, "Abraham Abdallah, I think, did us all a favor, for he has exposed as a sham the security at the world's most important financial institutions." The same two free e-mail addresses were used to request financial transfers for six different wealthy Merrill Lynch clients, according to the Post story. Merrill Lynch didn't notice? Why would Merrill accept any transfer requests, indeed take any financial communication seriously at all, from a free, obviously unverified anonymous e-mail account? I'm alarmed by the checks and balances that must be in place at big New York brokerage firms. Rather than being a story about a genius who almost got away, this is simply one more story of easy identity theft amid a tidal wave of similar crimes.
The Federal Trade Commission has received 40,000 complaints of identity theft since it started keeping track two years ago, but the agency is certain that represents only a fraction of real victims. This is a serious problem, long ignored by the industry. In fact, just last year the credit industry beat back a congressional bill known as The Identity Theft Protection Act, claiming it would be too expensive for them. "Clearly there has to be more leveling of the playing field. We have to hold banks and credit unions accountable." Last month the U.S. Federal Bureau of Investigation (FBI) was again warning electronic-commerce Web sites to patch their Windows-based systems to protect their data against hackers. The FBI's National Infrastructure Protection Center (NIPC) has coordinated investigations over the past several months into organized hacker activities targeting e-commerce sites. More than 40 victims in 20 states have been identified in the ongoing investigations, which have included law enforcement agencies outside the United States and private sector officials. The investigations have uncovered several organized hacker groups from Russia, the Ukraine, and elsewhere in Eastern Europe that have penetrated U.S. e-commerce and online banking computer systems by exploiting vulnerabilities in the Windows NT operating system, the statement said. Microsoft has released patches for these vulnerabilities, which can be downloaded from Microsoft's Web site for free. Once the hackers gain access, they download proprietary information, customer databases, and credit card information, according to the FBI. The hackers subsequently contact the company and attempt to extort money by offering to patch the system and by offering to protect the company's systems from exploitation by other hackers. The hackers tell the victim that without their services they cannot guarantee that other hackers will not access their networks and post stolen credit card information and details about the site's security vulnerability on the Internet. If the company does not pay or hire the group for its security services, the threats escalate, the FBI said. Investigators also believe that in some instances the credit card information is being sold to organized crime groups.

Defend yourself when you cannot defeat the enemy, and attack the enemy when you can.

Scott Culp, in a detailed list of security precautions on Microsoft's Web page, suggests that there are ten immutable laws of security.

Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore. It's an unfortunate fact of computer science: when a computer program runs, it will do what it's programmed to do, even if it's programmed to be harmful. When you choose to run a program, you are making a decision to turn over control of your computer to it. That's why it's important to never run, or even download, a program from an untrusted source, and by "source" I mean the person who wrote it, not the person who gave it to you.

Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore. In the end, an operating system is just a series of ones and zeroes that, when interpreted by the processor, cause the machine to do certain things. Change the ones and zeroes, and it will do something different. To understand why, consider that operating system files are among the most trusted ones on the computer, and they generally run with system-level privileges.
That is, they can do absolutely anything. Among other things, they're trusted to manage user accounts, handle password changes, and enforce the rules governing who can do what on the computer. If a bad guy can change them, the now-untrustworthy files will do his bidding, and there's no limit to what he can do. He can steal passwords, make himself an administrator on the machine, or add entirely new functions to the operating system. To prevent this type of attack, make sure that the system files (and the registry, for that matter) are well protected.

Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.
- He could mount the ultimate low-tech denial of service attack, and smash your computer with a sledgehammer.
- He could unplug the computer, haul it out of your building, and hold it for ransom.
- He could boot the computer from a floppy disk, and reformat your hard drive. But wait, you say, I've configured the BIOS on my computer to prompt for a password when I turn the power on. No problem: if he can open the case and get his hands on the system hardware, he could just replace the BIOS chips. (Actually, there are even easier ways.)
- He could remove the hard drive from your computer, install it into his computer, and read it.
- He could make a duplicate of your hard drive and take it back to his lair. Once there, he'd have all the time in the world to conduct brute-force attacks, such as trying every possible logon password. Programs are available to automate this and, given enough time, it's almost certain that he would succeed. Once that happens, Laws #1 and #2 above apply.
- He could replace your keyboard with one that contains a radio transmitter. He could then monitor everything you type, including your password.

Always make sure that a computer is physically protected in a way that's consistent with its value, and remember that the value of a machine includes not only the value of the hardware itself, but the value of the data on it, and the value of the access to your network that a bad guy could gain. At a minimum, business-critical machines like domain controllers, database servers, and print/file servers should always be in a locked room that only people charged with administration and maintenance can access. But you may want to consider protecting other machines as well, and potentially using additional protective measures. If you travel with a laptop, it's absolutely critical that you protect it. The same features that make laptops great to travel with (small size, light weight, and so forth) also make them easy to steal. There are a variety of locks and alarms available for laptops, and some models let you remove the hard drive and carry it with you. You also can use features like the Encrypting File System in Windows 2000 to mitigate the damage if someone succeeded in stealing the computer. But the only way you can know with 100% certainty that your data is safe and the hardware hasn't been tampered with is to keep the laptop on your person at all times while traveling.

Law #4: If you allow a bad guy to upload programs to your web site, it's not your web site any more. This is basically Law #1 in reverse. In that scenario, the bad guy tricks his victim into downloading a harmful program onto his machine and running it. In this one, the bad guy uploads a harmful program to a machine and runs it himself.
Although this scenario is a danger anytime you allow strangers to connect to your machine, web sites are involved in the overwhelming majority of these cases. Many people who operate web sites are too hospitable for their own good, and allow visitors to upload programs to the site and run them. As we've seen above, unpleasant things can happen if a bad guy's program can run on your machine. If you run a web site, you need to limit what visitors can do. You should only allow a program on your site if you wrote it yourself, or if you trust the developer who wrote it. But that may not be enough. If your web site is one of several hosted on a shared server, you need to be extra careful. If a bad guy can compromise one of the other sites on the server, it's possible he could extend his control to the server itself, in which case he could control all of the sites on it, including yours. If you're on a shared server, it's important to find out what the server administrator's policies are.

Law #5: Weak passwords trump strong security. The purpose of having a logon process is to establish who you are. Once the operating system knows who you are, it can grant or deny requests for system resources appropriately. If a bad guy learns your password, he can log on as you. In fact, as far as the operating system is concerned, he is you. Whatever you can do on the system, he can do as well, because he's you. Maybe he wants to read sensitive information you've stored on your computer, like your email. Maybe you have more privileges on the network than he does, and being you will let him do things he normally couldn't. Or maybe he just wants to do something malicious and blame it on you. In any case, it's worth protecting your credentials. Always use a password; it's amazing how many accounts have blank passwords. And choose a complex one. Don't use your dog's name, your anniversary date, or the name of the local football team. And don't use the word "password"! Pick a password that has a mix of upper- and lower-case letters, numbers, punctuation marks, and so forth. Make it as long as possible. And change it often. Once you've picked a strong password, handle it appropriately. Don't write it down. If you absolutely must write it down, at the very least keep it in a safe or a locked drawer; the first thing a bad guy who's hunting for passwords will do is check for a yellow sticky note on the side of your screen, or in the top desk drawer. Don't tell anyone what your password is. Remember what Ben Franklin said: two people can keep a secret, but only if one of them is dead. Finally, consider using something stronger than passwords to identify yourself to the system. Windows 2000, for instance, supports the use of smart cards, which significantly strengthens the identity checking the system can perform. You may also want to consider biometric products like fingerprint and retina scanners.

Law #6: A machine is only as secure as the administrator is trustworthy. Every computer must have an administrator: someone who can install software, configure the operating system, add and manage user accounts, establish security policies, and handle all the other management tasks associated with keeping a computer up and running. By definition, these tasks require that he have control over the machine. This puts the administrator in a position of unequalled power. An untrustworthy administrator can negate every other security measure you've taken.
He can change the permissions on the machine, modify the system security policies, install malicious software, add bogus users, or do any of a million other things. He can subvert virtually any protective measure in the operating system, because he controls it. Worst of all, he can cover his tracks. If you have an untrustworthy administrator, you have absolutely no security. When hiring a system administrator, recognize the position of trust that administrators occupy, and only hire people who warrant that trust. Call his references, and ask them about his previous work record, especially with regard to any security incidents at previous employers. If appropriate for your organization, you may also consider taking a step that banks and other security-conscious companies do, and require that your administrators pass a complete background check at hiring time, and at periodic intervals afterward. Whatever criteria you select, apply them across the board. Don't give anyone administrative privileges on your network unless they've been vetted – and this includes temporary employees and contractors, too. Next, take steps to help keep honest people honest. Use sign-in/sign-out sheets to track who's been in the server room. (You do have a server room with a locked door, right? If not, re-read Law #3). Implement a â€Å"two person† rule when installing or upgrading software. Diversify management tasks as much as possible, as a way of minimizing how much power any one administrator has. Also, don't use the Administrator account – instead, give each administrator a separate account with administrative privileges, so you can tell who's doing what. Finally, consider taking steps to make it more difficult for a rogue administrator to cover his tracks. For instance, store audit data on write-only media, or house System A's audit data on System B, and make sure that the two systems have different administrators. The more accountable your administrators are, the less likely you are to have problems. Law #7: Encrypted data is only as secure as the decryption key. Suppose you installed the biggest, strongest, most secure lock in the world on your front door, but you put the key under the front door mat. It wouldn't really matter how strong the lock is, would it? The critical factor would be the poor way the key was protected, because if a burglar could find it, he'd have everything he needed to open the lock. Encrypted data works the same way – no matter how strong the cryptoalgorithm is, the data is only as safe as the key that can decrypt it. Many operating systems and cryptographic software products give you an option to store cryptographic keys on the computer. The advantage is convenience – you don't have to handle the key – but it comes at the cost of security. The keys are usually obfuscated (that is, hidden), and some of the obfuscation methods are quite good. But in the end, no matter how well-hidden the key is, if it's on the machine it can be found. It has to be – after all, the software can find it, so a sufficiently-motivated bad guy could find it, too. Whenever possible, use offline storage for keys. If the key is a word or phrase, memorize it. If not, export it to a floppy disk, make a backup copy, and store the copies in separate, secure locations. Law #8: An out of date virus scanner is only marginally better than no virus scanner at all. Virus scanners work by comparing the data on your computer against a collection of virus â€Å"signatures†. 
Each signature is characteristic of a particular virus, and when the scanner finds data in a file, email, or elsewhere that matches the signature, it concludes that it's found a virus. However, a virus scanner can only scan for the viruses it knows about. It's vital that you keep your virus scanner's signature file up to date, as new viruses are created every day. The problem actually goes a bit deeper than this, though. Typically, a new virus will do the greatest amount of damage during the early stages of its life, precisely because few people will be able to detect it. Once word gets around that a new virus is on the loose and people update their virus signatures, the spread of the virus falls off drastically. The key is to get ahead of the curve, and have updated signature files on your machine before the virus hits. Virtually every maker of anti-virus software provides a way to get free updated signature files from their web site. In fact, many have â€Å"push† services, in which they'll send notification every time a new signature file is released. Use these services. Also, keep the virus scanner itself – that is, the scanning software – updated as well. Virus writers periodically develop new techniques that require that the scanners change how they do their work. Law #9: Absolute anonymity isn't practical, in real life or on the web. All human interaction involves exchanging data of some kind. If someone weaves enough of that data together, they can identify you. Think about all the information that a person can glean in just a short conversation with you. In one glance, they can gauge your height, weight, and approximate age. Your accent will probably tell them what country you're from, and may even tell them what region of the country. If you talk about anything other than the weather, you'll probably tell them something about your family, your interests, where you live, and what you do for a living. It doesn't take long for someone to collect enough information to figure out who you are. If you crave absolute anonymity, your best bet is to live in a cave and shun all human contact. The same thing is true of the Internet. If you visit a web site, the owner can, if he's sufficiently motivated, find out who you are. After all, the ones and zeroes that make up the web session have be able to find their way to the right place, and that place is your computer. There are a lot of measures you can take to disguise the bits, and the more of them you use, the more thoroughly the bits will be disguised. For instance, you could use network address translation to mask your actual IP address, subscribe to an anonymizing service that launders the bits by relaying them from one end of the ether to the other, use a different ISP account for different purposes, surf certain sites only from public kiosks, and so on. All of these make it more difficult to determine who you are, but none of them make it impossible. Do you know for certain who operates the anonymizing service? Maybe it's the same person who owns the web site you just visited! Or what about that innocuous web ! site you visited yesterday, that offered to mail you a free $10 off coupon? Maybe the owner is willing to share information with other web site owners. If so, the second web site owner may be able to correlate the information from the two sites and determine who you are. Does this mean that privacy on the web is a lost cause? Not at all. 
The Art of War

Sun-Tzu Wu is the reputed author of the Chinese classic Ping-fa (The Art of War), written approximately 475-221 B.C. Penned at a time when China was divided into six or seven states that often resorted to war with each other in their struggles for supremacy, it is a systematic guide to strategy and tactics for rulers and commanders. In doing business on the Internet during this time of rampant computer viruses and hacker attacks, it may be wise for us to follow some of his tactical principles in order to ensure the safety of ourselves and our future clients.

Know your enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle.

In a chilling article entitled Big Brother is Watching, Bob Sullivan of MSNBC recounts a tale from a recent visit to London: Only moments after stepping into the Webshack Internet cafe in London's Soho neighborhood, "Mark" asked me what I thought of George W. Bush and Al Gore. "I wouldn't want Bush running things," he said. "Because he can't run his Web site." Then he showed me a variety of ways to hack Bush's Web sites. That was just the beginning of a far-reaching chat during which the group nearly convinced me Big Brother is in fact here in London. "I don't know if he can run the free world," Mark said. "He can't keep the Texas banking system computers secure." So-called "2600" clubs are a kind of hacker "boy scout" organization – there are local 2600 chapters all around the globe. It is in this environment, and this mindset, that London's hackers do their work. They do not analyze computer systems and learn how to break them out of spite, or some childish need to destroy: Mark and friends see themselves as merely accumulating knowledge that could be used in self-defense if necessary. They are the citizen's militia, the Freedom Fighters of the Information Age, trying to stay one step ahead of technology that could one day be turned against them.

Jon-K Adams, in his treatise entitled Hacker Ideology (aka Hacking Freedom), states that hackers have been called both techno-revolutionaries and heroes of the computer revolution. Hacking "has become a cultural icon about decentralized power." But for all that, hackers are reluctant rebels. They prefer to fight with code than with words. And they would rather appear on the net than at a news conference. Status in the hacker world cannot be granted by the general public: it takes a hacker to know and appreciate a hacker. 
That's part of the hacker's revolutionary reluctance; the other part is the news media's slant toward sensationalism, such as, "A cyberspace dragnet snared fugitive hacker." The public tends to think of hacking as synonymous with computer crime, with breaking into computers and stealing and destroying valuable data. As a result of this tabloid mentality, the hacker attempts to fade into the digital world, where he – and it is almost always he – has a place, if not a home. In his self-conception, the hacker is not a criminal, but rather a "person who enjoys exploring the details of programmable systems and how to stretch their capabilities." Which means that he is not necessarily a computer geek. The hacker defines himself in terms that extend beyond the computer, as an "expert or enthusiast of any kind. One might be an astronomy hacker" (Jargon File). So in the broadest sense of his self-conception, the hacker hacks knowledge; he wants to know how things work, and the computer – the prototypical programmable system – simply offers more complexity and possibility, and thus more fascination, than most other things.

From this perspective, hacking appears to be a harmless if nerdish enthusiasm. But at the same time, this seemingly innocent enthusiasm is animated by an ideology that leads to a conflict with civil authority. The hacker is motivated by the belief that the search for knowledge is an end in itself and should be unrestricted. But invariably, when a hacker explores programmable systems, he encounters barriers that bureaucracies impose in the name of security. For the hacker, these security measures become arbitrary limits placed on his exploration, or in cases that often lead to confrontation, they become the focus of further explorations: for the hacker, security measures simply represent a more challenging programmable system. As a result, when a hacker explores such systems, he hacks knowledge, but ideologically he hacks the freedom to access knowledge.

Political hackers are another group who consider themselves modern freedom fighters. "Hacktivists" have officially moved from nerdish extremists to become the political protest visionaries of the digital age, a meeting at the Institute of Contemporary Arts in London was told on Thursday. Paul Mobbs, an experienced Internet activist and anti-capitalist protestor, will tell attendees that the techniques used by politically minded computer hackers – from jamming corporate networks and sending email viruses to defacing Web sites – have moved into the realm of political campaigning. Mobbs says that the term "Hacktivism" has been adopted by so many different groups, from peaceful Net campaigners to Internet hate groups, that it is essentially meaningless, but claims that Internet protest is here to stay. "It has a place, whether people like it or not," says Mobbs.

Steve Mizrach, in his 1997 dissertation entitled Is there a Hacker Ethic for 90s Hackers?, delves into this subject in great detail. He describes the divergent groups of hackers and explains their modus operandi: I define the computer underground as members of the following six groups. Sometimes I refer to the CU as "90s hackers" or "new hackers," as opposed to old hackers, who are hackers (old sense of the term) from the 60s who subscribed to the original Hacker Ethic.

§ Hackers (Crackers, system intruders) – These are people who attempt to penetrate security systems on remote computers. 
This is the new sense of the term, whereas the old sense of the term simply referred to a person who was capable of creating hacks, or elegant, unusual, and unexpected uses of technology. Typical magazines (both print and online) read by hackers include 2600 and Iron Feather Journal.

§ Phreaks (Phone Phreakers, Blue Boxers) – These are people who attempt to use technology to explore and/or control the telephone system. Originally, this involved the use of "blue boxes" or tone generators, but as the phone company began using digital instead of electro-mechanical switches, the phreaks became more like hackers. Typical magazines read by Phreaks include Phrack, Line Noize, and New Fone Express.

§ Virus writers (also, creators of Trojans, worms, logic bombs) – These are people who write code which attempts to a) reproduce itself on other systems without authorization and b) often has a side effect, whether that be to display a message, play a prank, or trash a hard drive. Agents and spiders are essentially 'benevolent' virii, raising the question of how underground this activity really is. Typical magazines read by Virus writers include 40HEX.

§ Pirates – Piracy is sort of a non-technical matter. Originally, it involved breaking copy protection on software, and this activity was called "cracking." Nowadays, few software vendors use copy protection, but there are still various minor measures used to prevent the unauthorized duplication of software. Pirates devote themselves to thwarting these things and sharing commercial software freely with their friends. They usually read Pirate Newsletter and Pirate magazine.

§ Cypherpunks (cryptoanarchists) – Cypherpunks freely distribute the tools and methods for making use of strong encryption, which is basically unbreakable except by massive supercomputers. Because the NSA and FBI cannot break strong encryption (which is the basis of PGP, or Pretty Good Privacy), programs that employ it are classified as munitions, and distribution of algorithms that make use of it is a felony. Some cryptoanarchists advocate strong encryption as a tool to completely evade the State, by preventing any access whatsoever to financial or personal information. They typically read the Cypherpunks mailing list.

§ Anarchists – These are people committed to distributing illegal (or at least morally suspect) information, including but not limited to data on bombmaking, lockpicking, pornography, drug manufacturing, pirate radio, and cable and satellite TV piracy. In this parlance of the computer underground, anarchists are less likely to advocate the overthrow of government than the simple refusal to obey restrictions on distributing information. They tend to read Cult of the Dead Cow (CDC) and Activist Times Incorporated (ATI).

§ Cyberpunk – usually some combination of the above, plus interest in technological self-modification, science fiction of the Neuromancer genre, and interest in hardware hacking and "street tech." A youth subculture in its own right, with some overlaps with the "modern primitive" and "raver" subcultures.

So should we fear these geeky little mischief-makers? The New York Post revealed recently that a busboy allegedly managed to steal millions of dollars from the world's richest people by stealing their identities and tricking credit agencies and brokerage firms. 
In his article describing this event, Bob Sullivan says, "Abraham Abdallah, I think, did us all a favor, for he has exposed as a sham the security at the world's most important financial institutions." The same two free e-mail addresses were used to request financial transfers for six different wealthy Merrill Lynch clients, according to the Post story. Merrill Lynch didn't notice? Why would Merrill accept any transfer requests, indeed take any financial communication seriously at all, from a free, obviously unverified anonymous e-mail account? I'm alarmed by the checks and balances that must be in place at big New York brokerage firms. Rather than being a story about a genius who almost got away, this is simply one more story of easy identity theft amid a tidal wave of similar crimes. The Federal Trade Commission has received 40,000 complaints of identity theft since it started keeping track two years ago, but the agency is certain that represents only a fraction of real victims. This is a serious problem, long ignored by the industry. In fact, just last year the credit industry beat back a congressional bill known as The Identity Theft Protection Act, claiming it would be too expensive for them. "Clearly there has to be more leveling of the playing field. We have to hold banks and credit unions accountable."

Last month the U.S. Federal Bureau of Investigation (FBI) was again warning electronic-commerce Web sites to patch their Windows-based systems to protect their data against hackers. The FBI's National Infrastructure Protection Center (NIPC) has coordinated investigations over the past several months into organized hacker activities targeting e-commerce sites. More than 40 victims in 20 states have been identified in the ongoing investigations, which have included law enforcement agencies outside the United States and private sector officials. The investigations have uncovered several organized hacker groups from Russia, the Ukraine, and elsewhere in Eastern Europe that have penetrated U.S. e-commerce and online banking computer systems by exploiting vulnerabilities in the Windows NT operating system, the statement said. Microsoft has released patches for these vulnerabilities, which can be downloaded from Microsoft's Web site for free. Once the hackers gain access, they download proprietary information, customer databases, and credit card information, according to the FBI. The hackers subsequently contact the company and attempt to extort money by offering to patch the system and by offering to protect the company's systems from exploitation by other hackers. The hackers tell the victim that without their services they cannot guarantee that other hackers will not access their networks and post stolen credit card information and details about the site's security vulnerability on the Internet. If the company does not pay or hire the group for its security services, the threats escalate, the FBI said. Investigators also believe that in some instances the credit card information is being sold to organized crime groups.

Defend yourself when you cannot defeat the enemy, and attack the enemy when you can.

Scott Culp, in a detailed list of security precautions on Microsoft's Web page, suggests that there are ten immutable laws of security.

Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore. 
It's an unfortunate fact of computer science: when a computer program runs, it will do what it's programmed to do, even if it's programmed to be harmful. When you choose to run a program, you are making a decision to turn over control of your computer to it. That's why it's important to never run, or even download, a program from an untrusted source – and by "source", I mean the person who wrote it, not the person who gave it to you.

Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore.

In the end, an operating system is just a series of ones and zeroes that, when interpreted by the processor, cause the machine to do certain things. Change the ones and zeroes, and it will do something different. To understand why, consider that operating system files are among the most trusted ones on the computer, and they generally run with system-level privileges. That is, they can do absolutely anything. Among other things, they're trusted to manage user accounts, handle password changes, and enforce the rules governing who can do what on the computer. If a bad guy can change them, the now-untrustworthy files will do his bidding, and there's no limit to what he can do. He can steal passwords, make himself an administrator on the machine, or add entirely new functions to the operating system. To prevent this type of attack, make sure that the system files (and the registry, for that matter) are well protected.

Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.

§ He could mount the ultimate low-tech denial of service attack, and smash your computer with a sledgehammer.
§ He could unplug the computer, haul it out of your building, and hold it for ransom.
§ He could boot the computer from a floppy disk, and reformat your hard drive. But wait, you say, I've configured the BIOS on my computer to prompt for a password when I turn the power on. No problem – if he can open the case and get his hands on the system hardware, he could just replace the BIOS chips. (Actually, there are even easier ways.)
§ He could remove the hard drive from your computer, install it into his computer, and read it.
§ He could make a duplicate of your hard drive and take it back to his lair. Once there, he'd have all the time in the world to conduct brute-force attacks, such as trying every possible logon password. Programs are available to automate this and, given enough time, it's almost certain that he would succeed. Once that happens, Laws #1 and #2 above apply.
§ He could replace your keyboard with one that contains a radio transmitter. He could then monitor everything you type, including your password.

Always make sure that a computer is physically protected in a way that's consistent with its value – and remember that the value of a machine includes not only the value of the hardware itself, but the value of the data on it, and the value of the access to your network that a bad guy could gain. At a minimum, business-critical machines like domain controllers, database servers, and print/file servers should always be in a locked room that only people charged with administration and maintenance can access. But you may want to consider protecting other machines as well, and potentially using additional protective measures. If you travel with a laptop, it's absolutely critical that you protect it. The same features that make laptops great to travel with – small size, light weight, and so forth – also make them easy to steal. 
There are a variety of locks and alarms available for laptops, and some models let you remove the hard drive and carry it with you. You can also use features like the Encrypting File System in Windows 2000 to mitigate the damage if someone succeeds in stealing the computer. But the only way you can know with 100% certainty that your data is safe and the hardware hasn't been tampered with is to keep the laptop on your person at all times while traveling.

Law #4: If you allow a bad guy to upload programs to your web site, it's not your web site anymore.

This is basically Law #1 in reverse. In that scenario, the bad guy tricks his victim into downloading a harmful program onto his machine and running it. In this one, the bad guy uploads a harmful program to a machine and runs it himself. Although this scenario is a danger anytime you allow strangers to connect to your machine, web sites are involved in the overwhelming majority of these cases. Many people who operate web sites are too hospitable for their own good, and allow visitors to upload programs to the site and run them. As we've seen above, unpleasant things can happen if a bad guy's program can run on your machine. If you run a web site, you need to limit what visitors can do. You should only allow a program on your site if you wrote it yourself, or if you trust the developer who wrote it. But that may not be enough. If your web site is one of several hosted on a shared server, you need to be extra careful. If a bad guy can compromise one of the other sites on the server, it's possible he could extend his control to the server itself, in which case he could control all of the sites on it – including yours. If you're on a shared server, it's important to find out what the server administrator's policies are.

Law #5: Weak passwords trump strong security.

The purpose of having a logon process is to establish who you are. Once the operating system knows who you are, it can grant or deny requests for system resources appropriately. If a bad guy learns your password, he can log on as you. In fact, as far as the operating system is concerned, he is you. Whatever you can do on the system, he can do as well, because he's you. Maybe he wants to read sensitive information you've stored on your computer, like your email. Maybe you have more privileges on the network than he does, and being you will let him do things he normally couldn't. Or maybe he just wants to do something malicious and blame it on you. In any case, it's worth protecting your credentials. Always use a password – it's amazing how many accounts have blank passwords. And choose a complex one. Don't use your dog's name, your anniversary date, or the name of the local football team. And don't use the word "password"! Pick a password that has a mix of upper- and lower-case letters, numbers, punctuation marks, and so forth. Make it as long as possible. And change it often. Once you've picked a strong password, handle it appropriately. Don't write it down. If you absolutely must write it down, at the very least keep it in a safe or a locked drawer – the first thing a bad guy who's hunting for passwords will do is check for a yellow sticky note on the side of your screen, or in the top desk drawer. Don't tell anyone what your password is. Remember what Ben Franklin said: two people can keep a secret, but only if one of them is dead. Finally, consider using something stronger than passwords to identify yourself to the system. 
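Before turning to those stronger mechanisms, the complexity advice above is easy to illustrate with a short, hedged sketch. The function below is not from any product mentioned in this essay; it is a minimal Python illustration of the rules just described, and the 12-character threshold and the sample candidates are arbitrary choices made purely for the example.

    import string

    def password_weaknesses(password):
        """Return the reasons a candidate password looks weak, judged only
        against the simple rules discussed above (length plus a mix of
        character classes). A real policy would also reject dictionary
        words and previously breached passwords."""
        problems = []
        if len(password) < 12:
            problems.append("shorter than 12 characters")
        if not any(c.islower() for c in password):
            problems.append("no lower-case letters")
        if not any(c.isupper() for c in password):
            problems.append("no upper-case letters")
        if not any(c.isdigit() for c in password):
            problems.append("no digits")
        if not any(c in string.punctuation for c in password):
            problems.append("no punctuation marks")
        if password.lower() == "password":
            problems.append("literally the word 'password'")
        return problems

    for candidate in ("password", "Fido2001", "c0rrect-Horse-battery-Staple!"):
        print(candidate, "->", password_weaknesses(candidate) or "no obvious weaknesses")

The output simply lists which of the rules each candidate breaks; every check corresponds to one sentence of the advice above, and passing the function does not by itself make a password safe.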
Windows 2000, for instance, supports the use of smart cards, which significantly strengthens the identity checking the system can perform. You may also want to consider biometric products like fingerprint and retina scanners.

Law #6: A machine is only as secure as the administrator is trustworthy.

Every computer must have an administrator: someone who can install software, configure the operating system, add and manage user accounts, establish security policies, and handle all the other management tasks associated with keeping a computer up and running. By definition, these tasks require that he have control over the machine. This puts the administrator in a position of unequalled power. An untrustworthy administrator can negate every other security measure you've taken. He can change the permissions on the machine, modify the system security policies, install malicious software, add bogus users, or do any of a million other things. He can subvert virtually any protective measure in the operating system, because he controls it. Worst of all, he can cover his tracks. If you have an untrustworthy administrator, you have absolutely no security. When hiring a system administrator, recognize the position of trust that administrators occupy, and only hire people who warrant that trust. Call his references, and ask them about his previous work record, especially with regard to any security incidents at previous employers. If appropriate for your organization, you may also consider taking a step that banks and other security-conscious companies do, and require that your administrators pass a complete background check at hiring time, and at periodic intervals afterward. Whatever criteria you select, apply them across the board. Don't give anyone administrative privileges on your network unless they've been vetted – and this includes temporary employees and contractors, too. Next, take steps to help keep honest people honest. Use sign-in/sign-out sheets to track who's been in the server room. (You do have a server room with a locked door, right? If not, re-read Law #3.) Implement a "two person" rule when installing or upgrading software. Diversify management tasks as much as possible, as a way of minimizing how much power any one administrator has. Also, don't use the Administrator account – instead, give each administrator a separate account with administrative privileges, so you can tell who's doing what. Finally, consider taking steps to make it more difficult for a rogue administrator to cover his tracks. For instance, store audit data on write-only media, or house System A's audit data on System B, and make sure that the two systems have different administrators. The more accountable your administrators are, the less likely you are to have problems.

Law #7: Encrypted data is only as secure as the decryption key.

Suppose you installed the biggest, strongest, most secure lock in the world on your front door, but you put the key under the front door mat. It wouldn't really matter how strong the lock is, would it? The critical factor would be the poor way the key was protected, because if a burglar could find it, he'd have everything he needed to open the lock. Encrypted data works the same way – no matter how strong the cryptoalgorithm is, the data is only as safe as the key that can decrypt it. Many operating systems and cryptographic software products give you an option to store cryptographic keys on the computer. 
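The remedy this law works toward below – keep the decryption key off the machine, ideally in someone's head – can be sketched in a few lines. What follows is only an illustration, assuming nothing beyond Python's standard-library hashlib, os, and getpass modules; the iteration count and key length are arbitrary, and no operating system or product named in this essay actually works this way. The idea is simply that a key derived from a memorized passphrase at the moment it is needed never has to be written to disk at all.

    import hashlib
    import os
    import getpass

    def derive_key(passphrase, salt, length=32):
        """Stretch a memorized passphrase into an encryption key.
        Only the salt (which is not secret) needs to be stored
        alongside the encrypted data; the passphrase stays in the
        user's head."""
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                                   salt, 200_000, dklen=length)

    salt = os.urandom(16)                      # store this next to the ciphertext
    passphrase = getpass.getpass("Passphrase: ")
    key = derive_key(passphrase, salt)
    print("Derived a", len(key) * 8, "bit key without writing a secret to disk.")

The particular algorithm is beside the point; the hedge against this law is that whatever can decrypt the data lives in offline storage or in memory, not on the machine sitting next to the ciphertext.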
The advantage of on-machine storage is convenience – you don't have to handle the key – but it comes at the cost of security. The keys are usually obfuscated (that is, hidden), and some of the obfuscation methods are quite good. But in the end, no matter how well-hidden the key is, if it's on the machine it can be found. It has to be – after all, the software can find it, so a sufficiently-motivated bad guy could find it, too. Whenever possible, use offline storage for keys. If the key is a word or phrase, memorize it. If not, export it to a floppy disk, make a backup copy, and store the copies in separate, secure locations.

Law #8: An out-of-date virus scanner is only marginally better than no virus scanner at all.

Virus scanners work by comparing the data on your computer against a collection of virus "signatures". Each signature is characteristic of a particular virus, and when the scanner finds data in a file, email, or elsewhere that matches the signature, it concludes that it's found a virus. However, a virus scanner can only scan for the viruses it knows about. It's vital that you keep your virus scanner's signature file up to date, as new viruses are created every day. The problem actually goes a bit deeper than this, though. Typically, a new virus will do the greatest amount of damage during the early stages of its life, precisely because few people will be able to detect it. Once word gets around that a new virus is on the loose and people update their virus signatures, the spread of the virus falls off drastically. The key is to get ahead of the curve, and have updated signature files on your machine before the virus hits. Virtually every maker of anti-virus software provides a way to get free updated signature files from their web site. In fact, many have "push" services, in which they'll send notification every time a new signature file is released. Use these services. Also, keep the virus scanner itself – that is, the scanning software – updated as well. Virus writers periodically develop new techniques that require that the scanners change how they do their work.

Law #9: Absolute anonymity isn't practical, in real life or on the web.

All human interaction involves exchanging data of some kind. If someone weaves enough of that data together, they can identify you. Think about all the information that a person can glean in just a short conversation with you. In one glance, they can gauge your height, weight, and approximate age. Your accent will probably tell them what country you're from, and may even tell them what region of the country. If you talk about anything other than the weather, you'll probably tell them something about your family, your interests, where you live, and what you do for a living. It doesn't take long for someone to collect enough information to figure out who you are. If you crave absolute anonymity, your best bet is to live in a cave and shun all human contact. The same thing is true of the Internet. If you visit a web site, the owner can, if he's sufficiently motivated, find out who you are. After all, the ones and zeroes that make up the web session have to be able to find their way to the right place, and that place is your computer. There are a lot of measures you can take to disguise the bits, and the more of them you use, the more thoroughly the bits will be disguised. 
For instance, you could use network address translation to mask your actual IP address, subscribe to an anonymizing service that launders the bits by relaying them from one end of the ether to the other, use a different ISP account for different purposes, surf certain sites only from public kiosks, and so on. All of these make it more difficult to determine who you are, but none of them make it impossible. Do you know for certain who operates the anonymizing service? Maybe it's the same person who owns the web site you just visited! Or what about that innocuous web site you visited yesterday, that offered to mail you a free $10 off coupon? Maybe the owner is willing to share information with other web site owners. If so, the second web site owner may be able to correlate the information from the two sites and determine who you are. Does this mean that privacy on the web is a lost cause? Not at all. What it means is that the best way to protect your privacy on the Internet is the same as the way you protect your privacy in normal life – through your behavior. Read the privacy statements on the web sites you visit, and only do business with ones whose practices you agree with. If you're worried about cookies, disable them. Most importantly, avoid indiscriminate web surfing – recognize that just as most cities have a bad side of town that's best avoided, the Internet does too. But if it's complete and total anonymity you want, better start looking for that cave.
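As a footnote to Law #9, the correlation scenario described there – two web site owners pooling what each knows about the same visitor – takes only a few lines to demonstrate. Everything in the sketch below is invented for the sake of the example: the log entries, the field names, and the addresses (drawn from documentation-reserved IP ranges); no real site or service is being described.

    # Hypothetical visitor logs from two unrelated web sites.
    site_a_log = [
        {"ip": "203.0.113.7", "page": "/prices", "time": "09:14"},
        {"ip": "198.51.100.2", "page": "/contact", "time": "09:20"},
    ]
    site_b_log = [
        # Site B asked visitors for an email address to mail out a coupon.
        {"ip": "203.0.113.7", "coupon_email": "pat@example.com", "time": "09:31"},
    ]

    def correlate(a_log, b_log):
        """Join two logs on IP address: a visitor who was anonymous on
        site A becomes identifiable once site B shares what it knows."""
        emails_by_ip = {entry["ip"]: entry["coupon_email"] for entry in b_log}
        return [
            dict(entry, identified_as=emails_by_ip[entry["ip"]])
            for entry in a_log
            if entry["ip"] in emails_by_ip
        ]

    print(correlate(site_a_log, site_b_log))

Neither log identifies the visitor to the /prices page on its own; joined on a single shared attribute, they do – which is the whole of the law's argument against counting on anonymity.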

Wednesday, October 23, 2019

The Contribution of Processual and Emergent Perspectives to Strategic Change

Change is ubiquitous. Organisational change has become synonymous with managerial effectiveness since the 1980s (Burnes, 1996; Wilson, 1992). However, North American influence over the quest for commitment, efficiency and improved performance appears to have fallen back upon largely Tayloristic notions of management, with the result that organisational change is widely perceived to be controllable by modern management, with organisations themselves instrumental in their hands (Collins, 1997). Yet this 'scientific' approach appears to have diffused with scant regard to contextual variables that may serve to modify and constrain contemporary managerial rhetoric for change (Hatch, 1997). One perspective that attempts to refocus the debate on wider issues has come to be known as the processual or emergent approach to organisational change (Collins, 1997), and it is this perspective that this paper seeks to evaluate.

First, the inevitability of change is briefly considered, as the time frame selected for organisational analysis tends to dictate the substance of investigation. This leads into a critique of planned change under the umbrella of strategic choice, with its core assumptions based upon managerial hegemony. This approach is then contrasted with the processual and emergent perspectives that seek to widen management appreciation to include factors beyond the organisation and its immediate environments. The implications of the apparent divergence between theory and practice are briefly outlined before concluding that the subjectivist paradigm of the processual/emergent approach is best seen as a modification to theories of strategic choice, which may add to effective managerial practice in the future. This argument is qualified by the need to support such a modification with a fundamental change in modern managerial education.

The Inevitability of Change

'Change' exudes temporality. While it may be a truism that in any field of activity all periods may be characterised by change and continuity, the time frame selected will tend to highlight change or continuity (Blyton and Turnbull, 1998). For example, a focus upon organisational change during the last two decades may reveal a period of rapid change. However, a perspective encompassing the last two hundred years may indicate a basic continuity in the capitalist social mode of production (ibid). Consequently, differentiating between whether organisational change should be analysed from the perspective of a strict chronology of 'clock' or linear time, with its associated notions of relentless progress, planning and implementation, or whether change is viewed from the perspective of a processual analysis over tracts of time, has given rise to a vigorous debate on how change should be understood as it applies to complex business organisations (Wilson, 1992). Two paradigms dominate the analysis of organisational change. On the one hand, a positivist view holds that change is objectively measurable, and thus controllable, embracing notions of rationality, temporal linearity and sequence – change is an outcome of deliberate action by change agents (Hatch, 1997; Kepner and Tregoe, 1986). 
On the other hand, a subjectivist view holds that change is dependent upon the temporal context of the wider social system in which it occurs and is thus a social construction – while organisations define and attempt to manage their change processes, outcomes are not necessarily the result of the top-down cascade advocated by the planned approach (Pettigrew, 1985). Consequently, as a point of departure, planned organisational change shall be discussed before moving on to examine the emergent approach as a challenge to the rational model.

The Planned Perspective

Contemporary US and UK managerial ideology may be identified as an outcome of, and a contributor to, neo-liberalist voluntarism (Dunlop, 1993). This ideology is mobilised through the agency of management to protect capital's interests above all others. Consequently, management and managers come to be considered a social elite through their exercise of 'god-like' control over a logical and rational process of adaptation, change and ever-improving performance. The organisation is thus instrumental in the hands of management (Collins, 1997; Daft, 1998; Hatch, 1997; Kepner and Tregoe, 1986). Generally referred to as 'strategic choice', the planned approach, according to Wilson (1992:22), is constructed upon the following theories of organisation:
1 Organisational Development (OD) and Behavioural Modification (BM);
2 Planned incrementalism;
3 The 'enterprise culture', best practice and 'gurus' as change agents.
These perspectives all have in common the role of human agency, whereby '…human decisions make an important difference… a voluntarism in which human courage and determination count' (Gouldner 1980, cited in Wilson, 1992:25). OD and BM (closed system) approaches emanate from the field of psychology, positing that organisational change is implemented by management through changing the behaviour of individuals. OD aims to foster consensus and participation on the basis that management attributes resistance to change to poor interpersonal relations (Wilson, 1992). BM is a systematic approach to the conditioning of managerially defined 'appropriate' behaviour, based upon Skinnerian psychological theories of learning (reward and punishment) and motivation (ibid). Both approaches are based on the assumptions that managers are capable of identifying internal barriers to change, determining appropriate behaviours, and designing and implementing programmes to achieve desired outcomes. Consequently, there is a plethora of 'frameworks', 'recipes' and 'how to' packages aimed at managerial audiences (Collins, 1997). A central feature of many of these packages is Lewin's (1951) 'force field' framework, which proposes that change is characterised as a state of imbalance between pressures for change and pressures against change. It is suggested that managers are capable of adjusting the equilibrium state of zero-change by selectively removing or modifying specific forces in the required direction (Senior, 1997). Implicit is the normative nature of planned change: managers should know the various forces as they apply to their own particular situation, and should understand and possess the means to exert influence over them. It follows that, ceteris paribus, without deliberate managerial action, change at worst is unlikely to occur and, at best, is unlikely to realise desired outcomes without the intervention of chance (Collins, 1997). 
Planned incrementalism argues that change is constant and evolutionary and should be planned in small steps based on an orderly adjustment to information flowing in from the operating environment (Quinn 1980, cited in Senior, 1997). This approach is related to contingency theory. The argument runs that the most effective way to organise is contingent upon conditions of complexity and change in the environment. Thus, the organisation should achieve congruence with its market environment and managers should support their strategies with appropriate structures and processes to enhance the likelihood of success (ibid). Turning to the final 'ingredients', Wilson (1992:37) argues that 'enterprise culture', 'best practice' and 'management gurus' are different faces of the same ideology. Enterprise culture denotes best practice and grows from a particular interpretation of management theory. This interpretation shapes the role of external consultants and thus determines who the gurus are; the ideology becomes self-supporting. Thus the ideology of strategic choice is mobilised in support of managerial ideology: to be successful in a free market system (entrepreneurial), firms should be modelled by managers upon best practice (currently, from the US and Japan), should adopt flexible specialisation and decentralised structures, and should seek to create organisational cultures congruent with managers' own. The 'successful' manager comes to be defined as a 'change master' (Kanter, 1993; see Peters and Waterman, 1982).

The Emergent, Processual Perspective

A common critique of the planned perspective is that the assumption that management can rationally plan and implement organisational change ignores the influence of wider, more deterministic forces outside the realms of strategic choice (Wilson, 1992). Largely in opposition to this perspective, and generally referred to as 'systemic conflict', the emergent approach, according to Wilson (ibid:22), is constructed upon the following theories of organisation:
1 Contextualism;
2 Population ecology;
3 Life cycles;
4 Power and politics;
5 Social action.
While also tending to acknowledge the role of human agency in effecting change, these approaches serve to widen the debate to include the impact of human interaction at micro and macro levels, thus constraining strategic choice (ibid). Contextualism is based upon an open systems (OS) model which views any organisation as being an interdependent component of a much larger whole (Pettigrew, 1985). Serving as a direct intellectual challenge to closed system perspectives, fundamental is the notion that no organisation exists in a vacuum. Emery and Trist (1960, cited in Wilson, 1992) argue that OS reveals the following characteristics: equifinality – no one best way of achieving the same outcomes; negative entropy – importing operating environment resources to curtail or reverse natural decay; steady state – relationship stability between inputs, throughputs and outputs; cycles and patterns – cash flows, stock-turns and so on. Thus, OS enables the variances between organisations' performances to be explained by external influences, facilitating comparative analysis, the establishment of sectoral norms and the identification of 'supra-normal' practices (Wilson, 1992). Population ecology (and perhaps institutional theories) is based upon the Darwinian notion of 'survival of the fittest' (Hatch, 1997). 
Thus strategic change is aimed at maximising 'fitness' within the general population of organisations, through the identification of 'market' niches and strategies of specialisation, differentiation or generalism (Porter, 1980, 1985). Competitive advantage is thus created and sustained through the construction of distinctive and inimitable structures, processes and cultures, eg: erecting high barriers to entry through technological investment, or eliminating threats of product substitution through high R & D investment and thus (desired) innovation (ibid). The life cycle perspective explicitly recognises the temporal nature of organisational change. Though linear in nature (all life cycle theories assume birth, growth, maturity, decline and death as givens), this approach provides insights into the potential internal and external conditions (and constraints) that an organisation is likely to encounter during distinct life cycle phases (Greiner, 1972 cited in Senior, 1997). However, this approach suffers from a similar critique to those levied at models of planned change: 'cycles' are not in fact cycles (that would suggest reincarnation), development is linear and progressive, and an organisation's location on the 'cycle' is highly subjective. Perhaps the major contribution of the emergent approach to organisational change is the highlighting of the role of power and politics in moderating managerial efforts to effect fundamental and sustainable change (Handy, 1986). Essentially, three political models of power reveal that outcomes are incapable of being considered independently of processes and personal stakes. First, overt power is the visible manifestation of localised influence over preferred processes and outcomes (eg: 'it's the way we've always done things around here'). Second, covert power is less visible and related to the extent of information sharing and participation in change processes afforded by organisational sub-groups (eg: senior management) to others – the phrase 'inner circle' is a common indicator of covert power relations in operation. Third, contextual power suggests that outcomes are mediated by societal forces and the economic structure of society itself (eg: elites, notions of social justice, and so on) (Burrell and Morgan, 1979). Postmodern analysis reveals the influence of discourse, symbol and myth as interchangeable between organisations and societies in the endorsement of preferred solutions. Thus, contextual power may be utilised to shape the wider justification and acceptability of organisational change (eg: 'restructuring' for labour stripping; 're-engineering' for work intensification; 'partnership' for collective labour coercion; 'TQM' for zero-tolerance and panopticon managerial control). Moreover, the contextual power perspective also reveals the hegemony of accounting ideology in neo-liberal systems (itself positivist, reductionist and inextricably linked to Taylorism), thus serving to expose the influence of elite groups that remain notably silent under the strategic choice framework (Wilson, 1992). Finally, social action theories depict organisational culture (OC) as the structure of social action (ibid). The strategic choice framework would hold that OC is a possession of the organisation and is thus capable of manipulation. In contrast, the systemic conflict framework depicts OC as something an organisation is (a contrasting ontological position) and therefore largely beyond managerial influence (Legge, 1995). 
Nevertheless, 'strong' (integrated) notions of OC are eulogised by the so-called gurus (see Kanter, 1993; Peters and Waterman, 1982), despite receiving severe criticism for their weak methodological foundations (see Guest, 1992). The emergent approach appears to be at odds with the strong culture = high performance proposition at the heart of most change programmes; its causality is unclear.

Implications

As the above discussion illustrates, the management of change appears to hold sway over the analysis of change (Wilson, 1992). This implies that understanding has been exchanged for expediency. Put differently, managing change is treated as both a learnable and teachable skill. In view of the short-termism inherent in the US and UK economies, with their shareholder emphasis on maximum financial returns and minimal financial risk (itself a contradiction with the notion of the 'entrepreneur'), it is hardly surprising that 'recipes for success' are so eagerly sought after by under-pressure managers and eagerly supplied by management gurus with pound-signs in their eyes. Practice appears to be on a divergent path from theory (Collins, 1997). Collins (ibid) attributes this apparent divergence to managerial education, which must itself (as must any educative process) be viewed as a perpetuation of ideology. With respect to organisational change, management education serves to promote the aggrandisement of managers as 'Canute-like rulers of the waves'. Epitomised by the MBA (Master of Bugger All?) with its roots in North America, such programmes are themselves reductionist and short-term in nature. Thus, students are precluded by time constraints from exposure to the theoretical foundations of change and, consequently, may be discouraged from challenging received wisdom. This is not to assert that 'hands-on' skills are unimportant, but rather to point out that they lose potency in the absence of an appreciation of the wider context that MBA 'babble', among a wider range of programmes, serves to suffuse.

Conclusion – a rejection of Positivism?

The investigation of organisational change has not escaped the inexorable North American 'shift' towards hypothetico-deductive perspectives of economics and psychology, with their positivist paradigms focused upon atomisation akin to the natural sciences (Cappelli, 1995). From a temporal perspective, while organisational change is viewed as inevitable in much the same way as in nature, the time frame selected for analysis tends to dictate the scope and degree of change to be investigated. Short-termism, it appears, is a form of temporal reductionism in the search for objective truth, and it is a key factor behind the notion that managers can be trained to manage change through sets of skills that imply mastery over the 'natural' world and, therefore, time itself. In this view, planned models of change, rooted in classical theories of management, may be accused of being an ideological construct of assumed legitimacy and authenticity. On the other hand, a subjectivist, systemic tension approach rejects reductionist 'tool kits' and lays claim to the inclusion of contextual variables at work throughout an organisation, its operating environment and beyond. In this view, while change is clearly not beyond managerial influence, its management is reliant upon a wider understanding of the interplay of these variables, of which power relations may be prominent, in order to be able to predict the likely outcomes of managerial actions. 
However, for something to exist it must be capable of theoretical explanation. That practitioners have opted for voluntarist models of strategic change is not surprising given the elitist ideology of modern management: to control is to manage; short-termism equates to reduced risk and increased control; the institutions of Western corporate governance and finance thus have their goals met by such an approach. Yet this is to obfuscate the quintessential qualities of the processual, emergent contribution to organisational change. While not refuting planned change, it perhaps serves to modify it – for any change to be understood, explained and sustained, the duality of voluntarism and determinism must be acknowledged and incorporated into the managerial knowledge base. The emergent approach exposes the potential folly of the extremes of positivism as applied to organisations as social entities, thus throwing open the debate to multi-disciplinary perspectives and enriching the field of organisational change. To be of value, such enrichment must be reflected in managerial education itself.