
Wednesday, July 31, 2019

BDO Cash Management Essay

BDO Cash Management Solutions provide a diverse range of financial solutions that can be customized to meet even your most demanding and complex financial needs. With BDO, you have cash management solutions you never thought possible. The BDO services at your disposal:

Payables Solutions
- Integrated Disbursement Solutions – Provides greater efficiency by consolidating all payment transactions
- Payroll – Maximizes the convenience of paying employees' salaries, benefits and incentives
- Check Disburse – Streamlines and customizes your check printing and preparation
- Government Payments – Gives you convenience in paying various government agencies using BDO's Online Banking Service

Receivables Solutions
- Auto-Debit Arrangement – Simplifies and ensures on-time collection of receivables
- Bills Payment Facility – Efficiently facilitates collection and consolidation of payments from your clients through BDO's various channels
- Cash & Check Deposit Pick Up – Provides peace of mind by securing your collection using armored cars or authorized couriers
- Post-Dated Check Warehousing – Offers safe and automated management of future-dated check collections
- Point of Sale Terminals – A cost-effective and efficient alternative to cash-based collections

At BDO, we know you have banking needs that can be vastly different from others' and minutely specific to your operations. This is the compelling reason why we have put together a diverse range of banking products and services to provide the best possible solution for your specific banking need. Currently, BDO's Cash Management Solutions offers 18 products to serve the cash management needs of over 7,000 clients. The effectiveness of CMS can be seen in its average annual growth of 20% and P130 million average annual growth. Our track record has also gained the recognition and respect of foreign banks, as many have made BDO their preferred partner to service their clients in the Philippines.
Dedicated support groups, including the Operations team, the Implementation team and the IT group, work together to provide seamless service solutions you probably never thought were possible. — Copyright @ https://www.bdo.com.ph/business/cash-management

Tuesday, July 30, 2019

Skidelsky Warwick Lecture

In my third and fourth lectures I examine the monetary and fiscal confusion which has reigned in the last five years - the experiments with 'unorthodox monetary policy' and the austerity drive in fiscal policy - as policy makers sought a path to recovery. In my fifth lecture I look at the causes of the crisis from the standpoint of the world monetary system. Finally, I ask the question: what should post-crash economics be like? What guidance should economics offer the policy-maker to prevent further calamities of the kind we have just experienced? What should students of economics be taught? In this lecture I will consider only those bits of pre-crash orthodoxy relevant to policy making, the main emphasis being on UK developments. Theories of expectation formation played an overwhelming part in shaping the theory of macroeconomic policy, with changes in the way economists modeled expectations marking the different phases of theory. I will treat these in roughly chronological order, starting with the Keynesian theory.

II. UNCERTAIN EXPECTATIONS

Keynesian macro theory dominated policy from roughly 1945-1975. The minimum doctrine - not in Keynes, but in accepted versions of Keynesian theory - needed to justify policy intervention to stabilize economies is:

SLIDE 1
1. Uncertain expectations, particularly important for investment, leaving investment to depend on 'conventions' and 'animal spirits'.
2. Relative interest-inelasticity of investment.
3. a) Sticky nominal wages (unexplained) and b) sticky nominal interest rates (explained by liquidity preference).

The first point suggested investment was subject to severe fluctuations; the last suggested there was a lack or weakness of spontaneous recovery mechanisms, i.e. the possibility of 'under-employment equilibrium'. This led to a prescription for macro-policy: to prevent or minimize fluctuations of investment demand. Point 2 in combination with 3b suggested the primacy of fiscal over monetary policy for stabilization.
SLIDE 2

'For Keynes, it was the tendency for the private sector, from time to time, to want to stop spending and to accumulate financial assets instead that lay behind the problems of slumps and unemployment. It could be checked by deficit spending.' (C. J. Allsopp and D. Mayes (1985), in D. Morris (ed.), 'The Economic System in the UK', 374)

'In the standard Keynesian economic model, when the economy is at less than full capacity, output is determined by demand; and the management of economic activity and hence employment is effected by managing demand.' (ibid, 370)

Mention in passing that there was a theoretical and social radicalism in Keynes obliterated in the standard postwar Keynesian model. For example, he thought insufficient demand was chronic and would get worse; and that, in consequence, the longer-term survival of a free enterprise system depended on the redistribution of wealth and income and the reduction in hours of work. I will return to these points in my last lecture.

Demand-management

The government used fiscal policy (variations in taxes and spending) to maintain full employment, while keeping short-term interest rates close to some 'normal' (or expected) level; i.e. monetary policy was largely bypassed as a tool of demand-management. The government forecast real GDP for the following year by forecasting the year-on-year movement of its expenditure components: consumption, fixed capital formation, stock building, government spending, and net exports. Budget deficits were then adjusted to maintain full employment. There was no explicit modeling of expectations, though attention was paid to the issue of 'confidence'. The prevalent view was that the confidence of the business community was best maintained by a commitment to full employment.
It was different with the balance of payments. With sterling convertible into foreign currencies at a fixed exchange rate, governments also needed to retain the confidence of non-resident holders of sterling, so the two requirements of confidence might pull in different directions. 'Stop-Go' was the result. Stop-Go notwithstanding, fiscal activism proved highly successful, aided by the long post-war boom. The budget remained in surplus, with current-account revenues exceeding expenditure and with borrowing mostly restricted to financing public investment not covered by current-account surpluses. Chancellors from Cripps to Macmillan were even tempted to extend this above-the-line surplus to an overall surplus by covering capital expenditure below the line from revenue, yet this was not achieved. Nonetheless, the public-sector borrowing requirement (PSBR) fell from an average of 7.5% of GDP (1952-1959) to 6.6% of GDP (1960-1969). The national debt-to-income ratio fell from 3:1 in 1950 to 0.7:1 in 1970. Unemployment was consistently below 2.5% and inflation was low.

III. THE RISE AND FALL OF PHILLIPS CURVE KEYNESIANISM

The post-war problem turned out to be not unemployment but inflation. With full capacity utilization, whether generated by Keynesian policy or by benign world conditions, there was always going to be pressure on prices. So the attention of Keynesian policymakers increasingly turned to fighting inflation, using both fiscal and monetary tools. In this they were also successful for a time. But from the late 1960s, inflation started to creep up, and the unemployment cost of restraining it started to rise: we enter the era of 'stagflation'. The underlying theoretical question was: what caused inflation? Was it excess demand or 'cost-push'? There was no single Keynesian answer to this question. Some Keynesian economists argued that the labor market was like any other, with price being determined by the balance between supply and demand.
A reduction in the demand for labor would lower its price. Deflation would slow the rise of nominal wages, and hence the rise in the general price level. The question, of course, was how much deflation would be needed for stable prices. This was not an easy case for Keynesians to argue. Given their belief in sticky nominal wages, the unemployment cost might prove very high. Most Keynesian economists were more comfortable with the 'cost-push' theory of inflation: unions pushing up wages ahead of productivity. Prices rose because business managements raised them; managements raised prices because their costs had risen; costs rose owing to pay increases; and pay increased because otherwise unions would come out on strike. Higher unemployment would not stop them, because most of the unemployed could not do the strikers' jobs. In fact, cost-push could occur at levels well below full employment. Short of bringing back mass unemployment, deflating demand would not stop inflation. What was required was a compact with the unions to restrain pay push: incomes policies. Anti-inflation policy in the 1950s and 1960s wobbled between fiscal and monetary measures to restrain demand and attempts to reach pay deals with the unions. The Keynesians were rescued from this dilemma by the econometric work of A. W. Phillips. In 1958, A. W. Phillips published a famous article which claimed to demonstrate a well-determined relationship between the unemployment rate and the rate of wage increases. The Phillips Curve implied that there was a stable trade-off between unemployment and inflation. The prize was price stability with a small increase in unemployment, way short of depression. More generally, policy-makers were supposed to have a 'menu of choice' between different rates of inflation and unemployment.

SLIDE 3. ORIGINAL PHILLIPS CURVE

The Keynesian policy of demand-management unraveled with the attack on the Phillips Curve by Milton Friedman of Chicago University.
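Phillips's fitted relation, and the 'menu of choice' it seemed to offer, can be sketched numerically. The snippet below is a minimal illustration; the coefficients are those commonly quoted for Phillips's 1958 fitted curve, and the function name is ours.

```python
def phillips_wage_inflation(u):
    """Annual wage inflation (%) implied by unemployment rate u (%).

    Coefficients are those usually quoted for Phillips's 1958 fitted
    curve (w = -0.90 + 9.638 * u^-1.394); treat them as illustrative.
    """
    return -0.90 + 9.638 * u ** (-1.394)

# The implied trade-off: lower unemployment buys higher wage inflation.
for u in (1.0, 2.0, 5.0):
    print(u, round(phillips_wage_inflation(u), 2))
```

The curve is steep at low unemployment and flattens out, which is why a small rise in unemployment appeared to buy a large fall in wage inflation.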
In a single lecture in 1968, he demolished Phillips Curve Keynesianism and started the monetarist counter-revolution.

Adaptive Expectations

Friedman restated the pre-Keynesian idea that there was a unique equilibrium rate of unemployment, which he called the 'natural rate'. Inflation was caused by government attempts to reduce unemployment below the natural rate by increasing the amount of money in the economy. Friedman accepted that there was a trade-off between inflation and unemployment, but held that it was temporary, and existed only because workers were fooled into accepting lower real wages than they wanted by not taking into account the rise in prices. But if government repeatedly resorted to monetary expansion (for example by running budget deficits) in order to reduce unemployment below its 'natural' rate, this 'money illusion' would disappear and workers would put in increased wage demands to match the now expected rise in prices. In short, after a time workers developed inflationary expectations: they built the expected inflation into their wage bargaining. One could not use the Phillips Curve to control inflation in the long run, since the Curve itself shifted as the level of inflation rose.

SLIDE 4. FRIEDMAN'S EXPECTATIONS-AUGMENTED PHILLIPS CURVE

SLIDE 5. One simple version of adaptive expectations is stated in the following equation, where p^e is next year's rate of inflation that is currently expected; p^e_-1 is this year's rate of inflation that was expected last year; and p is this year's actual rate of inflation:

p^e = p^e_-1 + λ(p - p^e_-1), where λ is between 0 and 1.

This says that current expectations of future inflation reflect past expectations and an 'error-adjustment' term, in which current expectations are raised (or lowered) according to the gap between actual inflation and previous expectations. This error-adjustment is also called 'partial adjustment'. Friedman's work had huge anti-Keynesian policy implications.
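The partial-adjustment rule on SLIDE 5 is easy to simulate. A minimal sketch (the function name and the adjustment speed λ = 0.5 are illustrative choices, not from the lecture):

```python
def update_expectation(p_expected, p_actual, lam=0.5):
    """Adaptive expectations: p^e = p^e_-1 + lam * (p - p^e_-1)."""
    return p_expected + lam * (p_actual - p_expected)

# If actual inflation jumps from 2% to 10% and stays there, expectations
# close a fixed fraction of the remaining gap each period and converge on 10%.
expected = 2.0
for _ in range(20):
    expected = update_expectation(expected, 10.0)
print(round(expected, 4))
```

This is exactly the mechanism Friedman exploited: expectations catch up with any sustained inflation, so the trade-off the old Phillips Curve promised cannot persist.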
The five main ones were: First, macro-policy can influence nominal, but not real, variables: the price level, not the employment or output level. Second, Friedman re-stated the Quantity Theory of Money, the theory that prices (or nominal incomes) change proportionally with the quantity of money. By contrast, fiscal 'fine tuning' operates with 'long and variable lags': it is liable to land the economy in the wrong place at the wrong time. Consequently, such stabilization as was needed is much better done by monetary policy than fiscal policy. It lies within the power of the central bank, but not the Treasury, to keep nominal income stable. Provided the government kept the money supply growing in line with productivity there would be no inflation, and economies would normally be at their 'natural rate' of unemployment. Third, Friedman argued that inflation is 'always and everywhere a monetary phenomenon'. It was the total money supply in the economy which determined the general price level; cost pressures were not independent sources of inflation; they had to be validated by an accommodating monetary policy for firms to be able to get away with a mark-up based price determination strategy. Fourth, Friedman's permanent income hypothesis - dating from the early 1950s - suggested that it is households' average long-run income (permanent income) that is likely to determine total demand for consumer spending, rather than fluctuations in their current disposable income, as suggested by the Keynesian consumption function. The reason for this is that agents want smooth consumption paths. This implied that the degree of self-stabilization of the economy was greater than Keynes suggested, and that consequently multipliers were smaller. Keynesians tried to fight the monetarist onslaught by strengthening Keynesian micro-foundations, especially of observed nominal rigidities. They developed models with 'menu costs', 'insider-outsider' labor markets, and 'asymmetric information'.
These kept the door open for policy interventions to sustain aggregate demand. Nevertheless, Friedman's impact on macro-policy was swift and decisive.

SLIDE 6

'We used to think that you could spend your way out of a recession, and increase employment by cutting taxes and boosting Government spending. I tell you in all candour that that option no longer exists, and that in so far as it ever did exist, it only worked on each occasion since the war by injecting a bigger dose of inflation into the economy, followed by a higher level of unemployment as the next step.' Prime Minister James Callaghan (1976), Leader's speech, Blackpool

'The conquest of inflation should be the objective of macroeconomic policy. And the creation of conditions conducive to growth and employment should be the objective of microeconomic policy.' Chancellor of the Exchequer Nigel Lawson (1984), Mais Lecture

Discretionary demand-management was out; balanced budgets were back. The unemployment target was replaced by an inflation target. The 'natural' rate of unemployment was to be lowered by supply-side policies, which included legislative curbs on trade unions.

IV. RATIONAL EXPECTATIONS AND THE NEW CLASSICAL ECONOMICS

With rational expectations we enter the world of New Classical Economics. RE is the 'radical wing of monetarism ... best known for the startling policy conclusion ... that macro-economic policies, both monetary and fiscal, are ineffective, even in the short-run'. Rational expectations first appeared in the economic theory literature in a famous article by J. Muth in 1961, but only filtered through to policy discussion in the early 1970s with the work of Robert Lucas and Thomas Sargent on business cycles, and Eugene Fama on financial markets. The Lucas critique of adaptive expectations (1976) put paid to the idea of an exploitable trade-off between employment and inflation. Friedman's adaptive expectations rely on gradual adjustment of expectations to the experienced behavior of a variable.
But our knowledge includes not just what we have experienced but current pronouncements of public authorities and theoretical knowledge of aggregate relationships. For example, the Minister of Finance announces that he will increase the money supply by 10% a year to stimulate employment. The Quantity Theory of Money tells us that an increase in the money supply will raise prices proportionately. So it is rational to expect inflation to be 10% a year. All nominal values - interest rates, wage rates - are instantly adjusted to the expected rate of inflation. There is not even a brief interval of higher employment. Friedman's distinction between a Keynesian short run in which agents can be fooled and a Classical long run in which they know what to expect disappears. Adaptive behavior is a description of irrational behavior if agents know what to expect already. Notice, though, that in this example rational expectations is defined as belief in the Quantity Theory of Money.

SLIDE 7

'Expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory... Expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the "objective" probability distribution of outcomes).' (G. K. Shaw (1984), 56)

Formally, the rational expectations hypothesis (REH) says that agents optimally utilize all available information about the economy and policy to construct their expectations. As such, they have 'rational' expectations. They are also rational in that they use their expectations to maximize their utility or profits. This does not imply that agents never make mistakes; agents may make mistakes on occasion. However, all that is there to be learnt has already been learnt, and mistakes are assumed to be random, so that agents are correct on average. Agents learn the true values of parameters through repeated application of Bayes' theorem.
That is, they turn their subjective beliefs into objective probability distributions. An equivalent statement is that agents 'behave in ways consistent with the models that predict how they will behave'. Since the models contain all the available information, i.e. they are rational expectations models, following the model minimizes the possibility of making expectation errors. At the core of the rational expectations hypothesis is the assumption that the model of the economy used by individuals in making their forecasts is the correct one - that is, that the economy behaves in the way predicted by the model. The math is simplified by the device of the Representative Agent, the sum of all agents, possessed of identical information and utility preferences. This micro-economic device means that the framework can be used to analyse the impact of policies on aggregate welfare, as welfare is the utility of the agents. The implication of the REH is that outcomes will not differ systematically from what people expect them to be. If we take the price level, for instance, we can write:

SLIDE 8

P = E[P] + ε

This says that the price level will only differ from the expectation if there is a surprise. So, ex ante, the price anticipated is equal to the expectation. E[P] is the rational expectation based on all information up to date; ε is the error term, which has an expected value of zero, and is independent of the expectation. With rational expectations the Phillips Curve is vertical in the short run and in the long run.

SLIDE 9. THE SARGENT-LUCAS PHILLIPS CURVE

With rational expectations, government action can affect real variables only by surprise. Otherwise it will be fully anticipated. This rules out any fiscal or monetary intervention designed to improve an existing equilibrium. More generally, 'any portion of policy that is a response to publicly available information - such as the unemployment rate or the index of leading indicators - is irrelevant to the real economy'.
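The SLIDE 8 claim, that the price level differs from its expectation only by a zero-mean surprise, can be sketched in a few lines of Python (all numbers are illustrative):

```python
import random

random.seed(0)
expected_price = 100.0  # E[P], the rational expectation

# Actual prices differ from the expectation only by zero-mean 'surprises'.
prices = [expected_price + random.gauss(0.0, 2.0) for _ in range(100_000)]

# Forecast errors average out: agents are right on average.
avg_error = sum(p - expected_price for p in prices) / len(prices)
print(round(avg_error, 3))  # close to 0
```

Individual forecasts are wrong in every period, but there is no systematic bias for policy to exploit, which is the point of the vertical Phillips Curve.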
Policy can influence real variables only by using information not known to the public.

The Efficient Market Hypothesis

The application of rational expectations to financial markets is known as the 'Efficient Market Hypothesis' (EMH), made popular by Eugene Fama (1970, 1976). The EMH postulates that shares are always correctly priced on average because they adjust instantaneously and accurately to any newly released information. In the words of Fama, 'I take the market efficiency hypothesis to be the simple statement that security prices fully reflect all available information'. So prices can't be wrong, because if they were, someone would seek to profit from the error and correct it. It follows that, according to the efficient market hypothesis, it is impossible to consistently achieve returns in excess of average market returns (to beat the market). In an RE joke, two economists spot a $10 bill on the ground. One stoops to pick it up, whereupon the other interjects, 'Don't. If it were really $10, it wouldn't be there anymore.' The efficient market hypothesis is the modern manifestation of Adam Smith's 'invisible hand'. Increased regulation can only make markets less efficient, because regulators have less information than those engaged in the market, risking their own money. There are different versions of the efficient market hypothesis. In its 'weak' form, investors make predictions about current prices using only historical information about past prices (as in adaptive expectations). In its 'semi-strong' form, investors take into account all publicly available information, including past prices. (This is the most 'accurate' and the closest to rational expectations.) In its 'strong' form, investors take into account all information that can possibly be known, including insider information. Rational expectations models rely heavily on math. Lucas defined expectations as the mean of a distribution of a random variable.
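The weak-form claim, that past prices carry no exploitable information, can be sketched by checking that the serial correlation of news-driven returns is near zero. This is an illustrative simulation, not market data:

```python
import random

random.seed(1)
# Under the EMH, each period's return reflects only that period's news,
# modeled here as independent zero-mean shocks.
returns = [random.gauss(0.0, 1.0) for _ in range(50_000)]

# Lag-1 autocorrelation of returns.
mean = sum(returns) / len(returns)
num = sum((returns[i] - mean) * (returns[i + 1] - mean)
          for i in range(len(returns) - 1))
den = sum((r - mean) ** 2 for r in returns)
lag1_autocorr = num / den
print(round(lag1_autocorr, 3))  # near 0: yesterday's return doesn't predict today's
```

A trading rule built on past returns alone has nothing to work with in such a series, which is the $10-bill joke in statistical form.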
The greater the number of observations of a random variable, the more likely it is to have a bell-shaped or Normal distribution. The mean of the distribution, in ordinary parlance the average of the observations, is called the Expectation of the distribution. In the bell-shaped distribution, it coincides with the peak of the bell. Those who are supposed to hold Rational Expectations (i.e. all of us) are assumed to know how the systematic parts of the model determine a price. We use that knowledge to generate our prediction. This will be correct except for random influences. We can assume that such random events will also adhere to the bell-shaped distribution and that their mean/expectation will be zero. Thus the systematic or deterministic prediction based on theory is always correct. Errors have zero expectation. The tendency of the EMH, as is readily seen, is to rule out, or minimize, the possibility of bubbles - and therefore crashes - and, more generally, to rule out the possibility of crises being generated within the financial system: historically the most important source of crises. This being so, policy did not have to pay much attention to banks. Following the acceptance of the EMH, the financial system was extensively de-regulated.

Real Business Cycles and DSGE

DSGE modeling takes root in New Classical macroeconomics, where the works of Lucas (1975), Kydland and Prescott (1982), and Long and Plosser (1983) were most prominent. The earlier DSGE models were pure real business cycle (RBC) models, i.e. models that attempted to explain business cycles in terms of real productivity or consumption shocks, abstracting from money. The logic behind RBC models is clear. If money cannot affect real variables, the source of any disturbance to the real economy must be non-monetary. If we are all modeled as having rational expectations, business fluctuations must be caused by 'real' and 'unanticipated' 'shocks'. (Notice the use of the word 'shock'.)
These shocks make the economy dynamic and stochastic. Unemployment is explained in these models by rational adjustments by workers of their work/leisure trade-off to shifts in productivity. This is a fancy way of saying that there is never any unemployment. As a result of continuously re-optimizing agents, economies in DSGE models are always in some form of equilibrium, whether in the short run or the long run. The economy always starts from an equilibrium position, and even when there is a shock, it immediately jumps onto an equilibrium time path - the saddle path. So the economy never finds itself in a position of disequilibrium.

SLIDE 10

'The model provides an example of an economy where real shocks drive output movements. Because the economy is Walrasian, the movements are the optimal response to the shocks. Thus, contrary to the conventional wisdom about macroeconomic fluctuations, here fluctuations do not reflect any market failures, and government interventions to mitigate them can only reduce welfare. In short, the implication of real-business cycle models, in their strongest form, is that observed aggregate output movements represent the time-varying Pareto optimum.' (Romer (2011), 'Advanced Macroeconomics', 204)

Translated into English: depressions are optimal; any attempt to mitigate them will only make things worse. Later came the New Keynesians, who preserved the basic framework of the New Classical RBC/DSGE models, but added 'market frictions', like monopolistic competition and nominal rigidities, to make the models more applicable to the real world.

Critiques:

1. The fundamental criticism is that this whole class of New Classical models carries an intellectual premise - that agents are rational optimizers - to an extreme and absurd conclusion. By postulating complete information and complete markets, i.e. by abolishing Keynesian or Knightian uncertainty, they cut off enquiry into what might be rational behavior under uncertainty - such as 'herd behavior'.
They also exclude irrational expectations. Behavioral economics only really took off after the crisis.

2. The aim of New Classical economics was to unify macro and micro by giving macro-economics secure micro-foundations. Macroeconomic models should be based on optimization by firms and consumers. But New Classical models are not well grounded in micro-economics, since their account of human behavior is seriously incomplete.

3. By defining rational expectations as the mean of a random distribution, the New Classical models rule out as too exceptional to worry about 'fat tails' - that is, extreme events with disproportionately large consequences.

4. The vast majority of DSGE models utilize log-linearised utility functions which eliminate the possibility of multiple equilibria.

5. New Classical models have no place for money, and therefore for money hoarding, which depends on uncertainty. In pure DSGE models there is no financial sector. DSGE models depend on what Goodhart calls the 'transversality condition', which says that 'by the end of the day, or when the model stops, all agents shall have repaid all their debts, including all the interest owed, with certainty. In other words, when a person dies he/she has zero assets left.' Defaults cannot happen. This is another kind of logical madness.
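The 'real shocks' said to drive fluctuations in RBC/DSGE models are conventionally modeled as a persistent AR(1) process for log productivity. A minimal sketch of such a shock process (ρ and σ are common calibration-style choices, not values from the lecture):

```python
import random

random.seed(2)
rho, sigma = 0.95, 0.007   # persistence and innovation std. dev. (illustrative)
z, path = 0.0, []
for _ in range(5_000):
    z = rho * z + random.gauss(0.0, sigma)  # z_t = rho * z_{t-1} + eps_t
    path.append(z)

# Persistence shows up as high serial correlation in the simulated shocks.
mean = sum(path) / len(path)
num = sum((path[i] - mean) * (path[i + 1] - mean) for i in range(len(path) - 1))
den = sum((p - mean) ** 2 for p in path)
print(round(num / den, 2))  # close to rho
```

Everything cyclical in a pure RBC model is the economy's optimal response to a series like this, which is why, in such models, mitigating the resulting fluctuations can only reduce welfare.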

Monday, July 29, 2019

Evaluation and presentation Assignment Example | Topics and Well Written Essays - 4000 words

Evaluation and presentation - Assignment Example. So companies prefer to use this method because of its flexibility in its framework and because it works according to the company's demands. It is the duty of the system to keep the user informed about every action being taken against the user's request, by giving a proper response in the form of feedback within an acceptable time period. This information keeps the user interacting with the system at each step that is taken. The language used by the system should be understandable to the user. Instead of using the system's own terminology, it should use those words, concepts and phrases which users are familiar with. Information should be presented in a manner that follows a natural and logical sequence, so the user can easily understand the frames and interpret the information or situation quickly. Much of the time users make mistakes while using applications at speed, so they want to leave an unwanted page without viewing the complete information. The option of an emergency exit should be given to save the user's precious time. This lets the user back out easily from an unwanted situation reached by mistake. Different words, concepts and actions should not be used in ways that create doubt for the user in relating situations in sequence. The design should follow the usual platform conventions and make consistent use of them across applications without creating any hurdles. The system should be designed with a view to identifying error messages at the initial stage. It should also eliminate those conditions which cause errors, or check for them before the user encounters them. The option of confirmation should be given to users before they commit to any action. This keeps the user on the right path without committing errors and without wasting time. It is necessary to display the visibility of those actions, objects and options which minimize

Sunday, July 28, 2019

Change Management Essay Example | Topics and Well Written Essays - 750 words

Change Management - Essay Example As an employee of a company undergoing restructuring, I would like to receive information regarding how the changes in the organization are going to affect my day-to-day activities as an employee of the company. I would also like to receive reassurances that the company offers its employees job security. Often during restructuring the executive managerial staff decides to downsize and cut jobs. It is important for the company to provide its employees with the company's new mission and vision to ensure the firm's goals are aligned with the expectations of the employees. If any new managers are going to be hired, as an employee I would like to know what changes are going to occur among the managerial staff. Leadership is important, and employees follow leaders who earn their trust and respect through their actions and accomplishments. From whom would you like to get this information? Why? I would prefer the information to come directly from the board of directors or from the executive managerial staff. The managers of a company are responsible for the well-being and performance of the employees. A restructuring plan is a major strategic decision that must be made by the people in charge of a corporation. I would rather get the information from a manager because getting it from such a source ensures its validity. ... The reason this is the best option is that these forms of communication are reliable and official. The employees of a firm can trust communication that comes from the company directly. Effective memos clearly define their audience to ensure the proper stakeholder receives the information it needs (Colostate, 2012). One of the advantages of using internal forms of communication is that its use constitutes documentation of information by the company. Keeping a record of communication can prevent misinterpretation of data and information.
What would be the worst way of receiving this information? Why? The worst way of receiving this information would be through informal channels such as the press. The press often changes a story to make it more sensational. The purpose of the press is to create a story that readers will like. Often the press does not care about the truth, since its objective is to obtain the highest ratings possible. Four forms of press coverage are television, radio, internet, and the written press. In 2011 total U.S. communication and media spending reached $1.12 trillion, and spending is projected to reach $1.41 trillion by the year 2015 (Plunkettresearchonline, 2012). It is not in a company's best interest for employees to obtain information regarding corporate restructuring plans from the press, because that information may be biased, inaccurate, and corrupted. Information that employees obtain from the press can often cause panic among them. The press can also distort information in order to create controversy and chaos in an attempt to keep a story going. As a manager of

Saturday, July 27, 2019

ICI Essay Example | Topics and Well Written Essays - 2500 words

ICI - Essay Example This approach was considered a leader in IT delivery projects. Projects were broken down into different components and distributed to onsite, nearshore, and offshore sites, to deliver them at maximum value in the most cost-efficient way. This cut their costs by 30%, because the worldwide teams made working 24 hours a day a reality and shortened time to market. Infosys had a different approach to implementing their technology. Instead of analyzing the firm's processes and redesigning them, ICI looked at the process requirements, not the functional requirements. They found that inefficiencies could be found more readily through horizontal processes than through vertical functions, like sales and marketing. They also looked at process metrics. The idea was that everything ICI did needed to impact their clients' performance and thus increase shareholder value. ICI used the GDM to employ an onsite team that worked with the client to see how the company operated and was organized. Most companies had five- to ten-level processes used to develop a product from an idea into a reality. ICI was able to identify these and organized their team accordingly. The onsite team used process experts and SAP experts. At night, the offshore teams turned the templates into a configuration. Then, the next day, the onsite team would test the configuration with the client and create a second iteration. That night, the offshore team would develop it. Typically there were four or five iterations for each process object. Under typical circumstances, each one would take a week. In the case of 200 process objects, the configuration for all of them would take approximately six months using ICI's methods, whereas traditional methods could take up to a year. This also resulted in higher client satisfaction, because the work could be tested during the design and configuration processes. ICI also created a 1-1-3 model for cost reduction. 
This model allows business consulting resources at the market rate, an onsite IT implementation resource at a lower-than-average rate, and three offshore developers at lower-than-market rates. This allowed ICI to conduct major engagements for about $100 per hour, versus market rates with an average combined rate of $300 to $400. ICI followed Infosys in its philosophy of measuring everything. Infosys created a high awareness of quality of work using its Capability Maturity Model (CMM). This model judged the maturity of software processes currently in use and identified what was required to increase their maturity. ICI's ideas, from a broad service stance, were to assist clients in dealing with their technology-related issues in customer, product, and corporate operations. ICI found that the best way to deliver value was to make marked improvements in the process metrics of the client's operations through its engagement with ICI. So, every transformation and consulting engagement resulted in a concerted effort on the part of ICI to deliver as much improvement in process metrics as possible, in the hope of creating positive shareholder value. ICI would analyze a client's current operations, assess the process metrics of each process, and design changes in the structure that enabled the technology to deliver marked improvement in those metrics. In the quote-to-cash process, they measured the time between quote submission and time of payment, their capacity for processing those orders in a certain timeframe, and what percentage of orders had zero

Friday, July 26, 2019

Major HR issues and International managers Essay

Major HR issues and International managers - Essay Example Thus, "globalization is mainly process driven by international trade and investment for the benefit of the investor as well as the host country, with particularly emphasis on the employees as well as employees on both sides" (Rothenberg 2002, p.1). While employing local employees in the host country as well as employees from foreign countries, or expatriate employees, organizations have to implement certain International Human Resource Management (IHRM) practices to manage those employees optimally and effectively. This is where the role of international managers assumes importance. That is, international managers, by using IHRM concepts, have to play a prominent role in the management of human resources, particularly foreign or expatriate employees. Human Resource Management (HRM) is concerned with the way in which organizations manage their people (Redman and Wilkinson 2001). So, this paper will discuss how employee-centric HRM aspects such as appraisal and assessment techniques, reward systems and, importantly, training have to be managed effectively by international managers to enhance the manageability of the local employees working in an MNC under foreign management as well as the expatriate employees: in total, all employees working under foreign or international management. Role of International Human Resources Management (IHRM) Organizations, particularly MNCs, will not remain "static". They will, or have to, break 'boundaries' from both a geographical and an economic perspective to utilize the opportunities in new markets or countries and emerge successful. Thus, internationalisation is a happening concept which is being used by many firms to expand their reach globally. 
"As the global economy expands, as more products and services compete on a global basis and as more and more firms operate outside their countries of origin, the impact on various business functions becomes more pronounced" (Briscoe and Schuler 2004, p. 20). When organizations enter new countries as part of their global expansion plans, they will recruit employees from the local population. They will do that as both a feasible and a responsible thing. It is feasible in the sense that, as the MNC will be stationed in those host countries, recruiting locally will be an easier process than bringing employees from the home country or other third countries (Scullion and Collings 2006). In addition, as they will be manufacturing and marketing products or services for the local population, local employees will be the best choice. Importantly, it is a responsible thing because, through the recruitment of local employees, MNCs try to give back a share of their benefits. Although it is an unwritten rule, organizations are duty bound to recruit local employees. Apart from fulfilling their responsibility, this recruitment of local employees will importantly provide the MNCs with cheap and surplus labour. Thus, with the recruitment of local employees being a key component of an MNC's operations, the recruited employees have to be managed optimally by international managers on the basis of effective HR policies or IHRM policies. After the

Literacy Narrative Essay Example | Topics and Well Written Essays - 500 words - 2

Literacy Narrative - Essay Example When I reached school-going age, I enrolled in elementary school, where my teachers taught some of the simple aspects of language and pronunciation. I developed a keen interest in reading picture stories; I could easily connect the pictures to form coherent stories. My elder brother was very supportive and would often help me collect several picture books, which I would read, narrating the stories to him and my parents. Sometimes they were very excited, not because I could tell the stories well, but because I could spell and pronounce some words in a comical way, as I had just developed proper literacy skills. I remember a day when my brother took me for a walk in the nearest town; I urged him to take me to the bookshop to see if I could find some nice story books. Though he was reluctant, he agreed to take me on the condition that I would not cry for a book, because he did not have enough money. However, the craving for a certain interesting book at the bookshop made me hide it in my shirt, thinking that no one would know about it. When we reached the door, I was surprised when the door scanner sounded an alarm; the attendant smiled at me and asked my brother to pay, and never to scold me, since I was just a small boy. I was very embarrassed and vowed never to do it again. I went home happy to have the book, even though the incident had disappointed my elder brother. During a function at school, I was chosen to make a presentation to the visitors who were gracing the occasion. I received great applause from the audience and encouragement from my teachers, who noted that my literacy skills had really improved. The love of reading has propelled me to excel in my passion for research. So far, I have written various novels that translate my life into written literature and research, and others that are mere fictions. I

Thursday, July 25, 2019

On The Semiosphere by Juri Lotman Essay Example | Topics and Well Written Essays - 1750 words

On The Semiosphere by Juri Lotman - Essay Example Here, Lotman has successfully put forth an example which makes his complex arguments clear even to a lay person, through the use of analogy. The allegory is easy for the reader to understand. The overall conclusion that this essay arrives at is that "the levels of the semiosphere comprise an inter-connected group of semiospheres, each of them being simultaneously both participant in the dialogue (as part of the semiosphere) and the space of dialogue (the semiosphere as a whole)" (Lotman, 205). Lotman starts his essay by referring to the two major schools of thought in semiotics: the Saussurean school, which focuses on the 'act of communication', and the Peircean school, which stresses the 'sign', the basic, coded element of communication (205). Then he moves ahead to point out that both these schools have something in common, which is the stress they give either to a single communication act or a single sign, i.e., a single, atomic element (Lotman, 206). By describing this background, Lotman begins a logical reasoning process, giving chronological data and putting it as a block at the very beginning of the essay. He has argued that, in this way, "the individual act of sign exchange has come to be regarded as a model of natural language […] - as universal semiotic models" (Lotman, 206). In the next step of his rhetoric, Lotman contests this conventional thought. He has opined that this kind of reasoning is part of traditional and flawed scientific thinking, where one is always tempted to move "from the simple to the complex", and whereby one gets trapped into attributing a character to the object of study just because it provides some convenience for analysis (Lotman, 206). In this way, Lotman has been applying the method of logos, which says that the study of communication, or

Wednesday, July 24, 2019

Strategic and Transport Planning Essay Example | Topics and Well Written Essays - 2750 words

Strategic and Transport Planning - Essay Example Question one (A). The chief advantages/disadvantages of shifting more freight from road to rail. There are many modes of transportation in the United Kingdom; goods move by pipeline, truck, rail, water, and air. Per recent statistics, the volume of goods moved by truck shows very small increases. In contrast, the amount of freight moved by rail is comparable when one considers that the measurement multiplies the amount by the distance, in ton-miles. Advantages Any thriving, growing economy has to depend upon freight transportation. Freight lines are a critical component of any economy. In the United Kingdom, improved punctuality and reliability tracked in rail service delivery was at least 85% in 2006. By 2010, that number had increased substantially. The use of public transport (bus and light rail) has increased by more than 12% in England; compared with 2000, it maintains steady growth in every region (stalban.gov). Very impressive, too, is the reduction in the number of people killed or seriously injured in road accidents in Great Britain. In literal terms, the numbers decreased by 40% and the child death toll went down by 50%. These government statistics show the UK has made a large improvement in death percentages, and overall death tolls have decreased. This shows a dramatic difference, especially in several disadvantaged communities, by 2010 compared with the average deaths reported in 1994-98. The UK has quietly dropped many targets fixed in the 10-Year Transport Plan published in 2000 (Future of Transport, 2030). • Source: the Future of Transport: a network for 2030. Disadvantages Going by rail has several disadvantages. However, compared with the advantages they seem almost insignificant. 
Rail has limited routes; at times it simply does not stop everywhere. The routes and the timetables seem to be a bit inflexible. It can be more expensive if the corporation has a large amount of freight to haul, and it can sometimes be unreliable. Question two (B). Using an example of a major rail freight facility, describe the opportunities that may be taken up by industry. The UK shows rail freight to be a monumental success story in the transport sector over the last 15 years. An estimated £1.5 billion of investment in rolling stock, terminals, and support facilities underpins the growth of over 60% that the industry achieved. The industry's reliability and punctuality in all business segments meet its customers' requirements. A very high and improving percentage of inter-modal services arrived at their destinations on time (stalbans.gov). The company runs more than 5,000 freight trains a day throughout Europe and is the parent business of DB Schenker Rail (UK) Ltd (DBSR). DBAG's purchase of EWS was a strategic move to offer a network of integrated rail services throughout Europe. DBSR announced the establishment of a new service for temperature-controlled product for Tesco: goods are collected from suppliers in Spain, then transported by rail across France and through the Channel Tunnel to London. One train per day initially covers this service. This gives Tesco and other major UK retailers significant potential for

Tuesday, July 23, 2019

Multi-professional working Essay Example | Topics and Well Written Essays - 1500 words

Multi-professional working - Essay Example The nations which have paid attention to this issue and taken the necessary steps to improve child and mother health have achieved drops in newborn mortality rates (WHO, 2005). In the cycle of life, an individual depends upon the availability of health care professionals not only to save life but also to improve the physical and mental condition of human beings (International Federation of Gynecology & Obstetrics). When taken in the context of mother-child care systems, these professions range from nurses to experts. Studies have mainly focused upon the availability of multi-professional experts at the time of delivery in hospitals (Simpson et al, 2006; Mann and Pratt, 2006; Nielsen et al, 2007; Williams 2008). These professionals are mainly nurses, midwives, physicians, obstetricians, neonatologists, anesthesiologists (physicians trained in anesthesia) (ABA, 2009), pharmacists, and the pediatricians who can examine the child right after birth for any kind of disease or life-threatening condition (International Federation of Gynecology & Obstetrics). The approach to women with suspected preterm labor has altered slightly in the past three decades. The main element of that method is the slowdown or complete inhibition of contractions (Simhan, 2007; Caritis, et al, 1979), but only a professional team could apply such methods. Health care professionals and managers require a very strong and integrated system of care at both the local and the national level (Princeton University, 2007). This is particularly the case for deliveries and C-sections. 
If this system of multi-professionalism is integrated in hospitals at every level, particularly in gynecological departments, millions of deaths and disabilities can be avoided, as complications in delivery can cause severe mental or physical retardation (Lane, 1987; Stockham and Alice, 1891). A major issue, in this view, is also the lack of experience during critical conditions and the non-availability of multiple professionals at the time of need, particularly in critical or unexpected conditions like C-sections (Ramondt, 1990). Midwifery on its own is not recommended, but when midwives join the complete health care professional team they should be given the status of a special professional (Golden, 2002; Bailey, 1998). This should be taken into account given the fact that they provide quality care and support to the mother during childbearing and right after birth. This helps the mother establish a loving and comfortable relationship with the baby right after birth. They also help the mother feed the baby immediately after birth, which is a difficult task, as a newborn is a bit tricky to feed (Harper, 2006). To explain all these facts in detail, and to establish the importance of multi-professionalism in gynecological departments with particular reference to child delivery, a special clinical case is discussed below. Case Study In this case study we will see the interrelationship of various people and professionals in the childbirth procedure. The scenario in the case is that a pregnant woman, gravida 1, para 0 (primip), arrived at the hospital. This means that the woman was having her first pregnancy, or she had been pregnant before but had not given birth, i.e. she might have had an abortion or miscarriage

Monday, July 22, 2019

Genetically- Modified Foods and Ingredients Essay Example for Free

Genetically- Modified Foods and Ingredients Essay Visiting a supermarket has become a usual experience for each one of us. We have to read all the labels that warn us about genetically-modified ingredients and the expiry date. I try to reassure myself that food safety services keep dangerous products under control. Yet I have my doubts about eating my favorite snack, the French fries made at a fast food outlet. The cumulative effect of genetically-modified foods is particularly dangerous for sensitive populations, including children, elderly people, people with digestive problems, and even the rest of us, normal healthy people. I have studied the literature on genetically-modified ingredients, trying to be objective in my judgment. Genetically-modified ingredients advance the achievements of modern biology. These products and ingredients strengthen resistance to herbicides and improve the nutritional content of food. Genetically-modified (GM) food production is less time-consuming than conventional breeding. Molecular biologists have not yet discovered how harmful GM products and ingredients are, but they claim that GM foods may be environmentally hazardous. Only allergy has been recognized as a negative effect of GM foods. We cannot escape GM products, since two-thirds of genetically modified crops are corn, cotton, soybeans, and potatoes, not to mention the fruits we eat. This is just a sign that we should be well informed about what is going on in our world, especially in our food, which can affect our lives. We cannot escape this advancement in our modern world, but we, the people, can act to preserve a truly healthy lifestyle.

An Introduction To Network Topology

An Introduction To Network Topology In the context of a communication network, the term topology refers to the way in which the end points, or stations, attached to the network are interconnected; it is the arrangement of systems in a computer network. It can be either physical or logical. The physical topology refers to the way in which a network is laid out physically, including the devices, installation, and location. The logical topology refers to how data transfer in a network, as opposed to its design. Network topologies can be categorized into bus, ring, star, tree, and mesh, plus hybrid networks (complex networks built from two or more topologies).

Bus Topology
A bus topology is characterized by the use of a multipoint medium. A long, single cable acts as a backbone to connect all the devices in the network. In a bus topology, all computers or stations attach, through the appropriate hardware interfacing known as a tap, directly to the bus. Full-duplex operation between the station and tap allows data to be transmitted onto the bus and received from the bus. A transmission from any station propagates the length of the medium in both directions and can be received by all other stations. At each end of the bus is a terminator, which absorbs any signal, removing it from the bus. Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a connector that either splices into the main cable or punctures the sheathing of the cable to create contact with the metallic core. A bus network works best with a limited number of computers.

Advantages
A bus topology is very easy to install. Cabling is minimal compared with other topologies, because one main backbone cable is laid efficiently along the network path. A bus topology is well suited to a small network. 
If one computer fails in the network, the other computers are not affected; they continue to work. A bus is also less expensive than a star topology.

Disadvantages
The cable length is limited, which in turn limits the number of stations. If the backbone cable fails, the entire network goes down. Faults are very difficult to troubleshoot. Maintenance cost is very high in the long run. Terminators are required at both ends of the cable.

Figure 2: Bus topology

Ring Topology
In a ring topology, the network consists of dedicated point-to-point connections and a set of repeaters in a closed loop. A signal is passed along the ring in one direction, from device to device, until it reaches its destination; the direction may be clockwise or anticlockwise. When a device receives a signal intended for another device, its repeater regenerates the bits and passes them along. As with the bus and tree, data are transmitted in frames. As a frame circulates past all the other stations, the destination station recognizes its address and copies the frame into a local buffer as it goes by. The frame continues to circulate until it returns to the source station, where it is removed. These topologies are used on school campuses and in some office buildings.

Advantages
A ring performs better than a star topology under heavy workload. No network server is needed to manage the connections between the computers. A ring is cheaper than a star topology because of less wiring. By adding to the token ring, a large network can be created. It is a very orderly network, because every device has access to the token and the opportunity to transmit.

Disadvantages
A break in the ring (such as a disabled station) can disable the entire network. A ring is much slower than an Ethernet network under normal load. Any moves, adds, and changes of devices can affect the network. Network connection devices (network adapter cards and MAUs) are much more expensive than Ethernet cards. 
Star Topology
In a star topology, each station is directly connected to a common central node called a hub. Unlike in a mesh topology, the devices are not directly linked to one another; a star topology does not allow direct traffic between devices. The controller acts as an exchange: if one device wants to send to another, it sends the data to the controller, which then relays the data to the connected device. In a star, each device needs only one link and one I/O port to connect it to any number of others. The star topology is used in local area networks (LANs), and high-speed LANs often use a star topology with a central hub.

Advantages
If one link fails in a star topology, only that link is affected; all other links remain active. It is easy to identify and isolate faults. It is easy to expand the network. There are no disruptions to the network when connecting or removing devices. It is very easy to manage because of its functional simplicity.

Disadvantages
In a star topology, if the hub goes down, the entire network fails. It requires more cable than a linear bus topology. It is much more expensive than a bus topology because of the cost of the hubs.

Tree Topology
A tree topology is a generalization of the bus topology: it integrates multiple star topologies together onto a bus. The transmission medium is a branching cable with no closed loops. The tree layout begins at a point known as the headend. The branches in turn may have additional branches, allowing quite complex layouts. A transmission from any station propagates throughout the medium and can be received by all other stations. This topology allows for the expansion of an existing network.

Advantages
Tree topology is well supported by hardware and software vendors. There is point-to-point wiring for each segment of the network. It is the best topology for branched networks. 
Disadvantages
It is more expensive, because more hubs are required to install the network. A tree topology depends entirely upon the backbone line; if it fails, the entire network fails. It is more difficult to configure and wire than other network topologies. In a tree topology, the length of the network depends on the type of cable being used.

Mesh Topology
In a mesh topology, every device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries traffic only between the two devices it connects. To find the number of physical links in a fully connected mesh network with n nodes, we first consider that each node must be connected to every other node. Node 1 must be connected to n-1 nodes, node 2 must be connected to n-1 nodes, and finally node n must be connected to n-1 nodes. However, since each physical link allows communication in both directions, we can divide the number of links by 2. In other words, in a mesh topology we need n(n-1)/2 duplex links.

Figure 5: Mesh topology

Suppose we are connecting 15 nodes in a mesh topology; then the number of cables required is:

Number of cables = n(n-1)/2 = 15(15-1)/2 = 15 × 14 / 2 = 15 × 7 = 105

Therefore, the total number of cables required to connect 15 nodes is 105.

Advantages
There are no traffic problems, because of the dedicated links in the mesh network. A mesh topology is robust: if one link becomes unusable, it does not incapacitate the entire system. Point-to-point links make fault identification and fault isolation easy. The dedicated lines give security and privacy to the data traveling along them. The network can be expanded without any disruption to users.

Disadvantages
Installation and reconnection are difficult. A large amount of cabling and a large number of I/O ports are required. The sheer bulk of the wiring can be greater than the available space can accommodate. The hardware required to connect each link can be prohibitively expensive. 
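The n(n-1)/2 link-count formula for a fully connected mesh can be checked with a short script (an illustrative Python sketch, not part of the original article):

```python
def mesh_links(n: int) -> int:
    """Full-duplex point-to-point links in a fully connected mesh of n nodes.

    Each of the n nodes connects to the other n - 1 nodes; dividing by 2
    removes double-counting, since one duplex link serves both directions.
    """
    return n * (n - 1) // 2

print(mesh_links(15))  # 105, matching the worked example for 15 nodes
```

The quadratic growth of this count is exactly why the disadvantages above mention the sheer bulk of cabling and I/O ports: doubling the node count roughly quadruples the number of links.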
Hybrid Topology
A network can be hybrid, using two or more topologies together. For example, we can have a main star topology with each branch connecting several stations in a bus topology.

The OSI Model
The Open Systems Interconnection (OSI) reference model was developed by the International Organization for Standardization (ISO) as a model for a computer protocol architecture and as a framework for developing protocol standards. The purpose of the OSI model is to show how to facilitate communication between different systems without requiring changes to the logic of the underlying hardware and software. The OSI model is not a protocol; it is a model for understanding a network architecture that is flexible, robust, and interoperable. The OSI model is a layered framework for the design of network systems that allows communication between all types of computer systems. It consists of seven separate but related layers, each of which defines a part of the process of moving information across a network. The seven layers of the OSI reference model can be divided into two categories: upper layers and lower layers.

Upper layers of the OSI model:
Application layer
Presentation layer
Session layer

The upper layers of the OSI model deal with application, presentation, and session issues and generally are implemented only in software. The highest layer, the application layer, is closest to the end user. These upper layers act as an interface between the user and the computer. The term upper layer is sometimes used to refer to any layer above another layer in the OSI model. Examples of upper-layer technologies in the OSI model are SNMP, FTP, and WWW.

Lower layers of the OSI model:
Transport layer
Network layer
Data link layer
Physical layer

The lower layers of the OSI model provide network-specific functions such as data transport issues (flow control, addressing, and routing). 
The lower layers of the OSI model (the physical layer and the data link layer) are implemented in both hardware and software. Examples of lower-layer technologies in the OSI model are TCP, UDP, IP, and IPX.

Application layer
The application layer enables the user, whether human or software, to access the network. It provides user interfaces and support for services such as electronic mail, remote file access and transfer, shared database management, and other types of distributed information services. The application layer provides specific services such as the network virtual terminal; file transfer, access, and management; mail services; and directory services.
Network virtual terminal: A network virtual terminal is a software version of a physical terminal, and it allows a user to log on to a remote host.
File transfer, access, and management: This application allows a user to access files in a remote host (to make changes or read data), to retrieve files from a remote computer for use in the local computer, and to manage or control files in a remote computer locally.
Mail services: This application provides the basis for e-mail forwarding and storage.
Directory services: This application provides distributed database sources and access to global information about various objects and services.

Presentation layer
The presentation layer is concerned with the syntax and semantics of the information exchanged between two systems. The presentation layer is responsible for the translation, compression, and encryption of the messages sent between the layers.
Translation: Processes in two systems usually exchange information in the form of character strings, numbers, and so on. The information is changed into bit streams before being transmitted. The presentation layer at the sender changes the information from its sender-dependent format into a common format. On the receiving machine, the presentation layer changes the common format into its receiver-dependent format. 
Encryption: Encryption means that the sender transforms the original information into another form and sends the resulting message out over the network. Decryption reverses the process to transform the message back into its original form.
Compression: Data compression reduces the number of bits contained in the information. It becomes particularly important in the transmission of multimedia such as text, audio, and video.

Session layer
The session layer is the network dialog controller. It establishes, maintains, and synchronizes the interaction among communicating systems. This layer has specific responsibilities, including the following:
Dialog control: The session layer allows two systems to enter into a dialog. It allows the communication between two processes to take place in either half-duplex (one way at a time) or full-duplex (two ways at a time) mode.
Synchronization: The session layer allows a process to add checkpoints, or synchronization points, to a stream of data.
Examples associated with the session layer are MPEG, JPEG, MIDI, and NCP.

Transport layer
The transport layer is responsible for process-to-process delivery of the entire message; that is, for the delivery of a message from one process to another. A process is an application program running on a host. The transport layer ensures that the whole message arrives intact and in order, overseeing both error control and flow control at the source-to-destination level. It also has some specific responsibilities, mentioned below:
Service-point addressing: The transport layer includes a type of address called a service-point address (or port address). The network layer gets each packet to the correct computer; the transport layer gets the entire message to the correct process on that computer.
Segmentation and reassembly: A message is divided into transmittable segments, with each segment containing a sequence number. 
These numbers enable the transport layer to reassemble the message correctly upon arrival at the destination and to identify and replace packets that were lost in transmission. Connection control: the transport layer can be either connectionless or connection-oriented. A connectionless transport layer treats each segment as an independent packet and delivers it to the transport layer at the destination machine. A connection-oriented transport layer makes a connection with the transport layer at the destination machine before delivering the packets; after all the data are transferred, the connection is terminated. Flow control: the transport layer is responsible for flow control. However, flow control at this layer is performed end to end rather than across a single link. Error control: the transport layer is also responsible for error control. Error control at this layer is performed process to process rather than across a single link. The sending transport layer makes sure that the entire message arrives at the receiving transport layer without error. Protocols at this layer include TCP and UDP.

Network layer

The network layer is responsible for the source-to-destination delivery of a packet, possibly across multiple networks (links). This layer ensures that each packet gets from its point of origin to its final destination. The network layer's other responsibilities include the following. Logical addressing: if a packet passes the network boundary, it needs another addressing system to help distinguish the source and destination systems. The network layer adds a header to the packet coming from the upper layer that, among other things, includes the logical addresses of the sender and receiver. Routing: when independent networks are connected to create internetworks or a large network, the connecting devices route or switch the packets to their final destination.
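The transport-layer segmentation and reassembly described above can be illustrated with a short sketch. The helper functions are illustrative only, not a real transport protocol; real protocols such as TCP carry the sequence number inside a segment header rather than as a bare tuple:

```python
# Sketch of transport-layer segmentation and reassembly.
# Each segment carries a sequence number so the receiver can
# reorder segments that arrive out of order and detect gaps.

def segment(message: bytes, size: int):
    """Split a message into (sequence_number, chunk) segments."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(segments):
    """Rebuild the original message regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"the transport layer delivers the whole message"
segs = segment(msg, 8)
segs.reverse()                      # simulate out-of-order arrival
assert reassemble(segs) == msg      # message restored intact and in order
```

Sorting on the sequence number is what restores the original order; a real implementation would also use the numbers to detect and re-request lost segments.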
Data link layer

The data link layer transforms the physical layer, a raw transmission facility, into a reliable link. It makes the physical layer appear error-free to the upper layer. Its other responsibilities include the following. Framing: the data link layer divides the stream of bits received from the network layer into manageable data units called frames. Physical addressing: the data link layer adds a header to the frame to define the sender and/or receiver of the frame. If the frame is intended for a system outside the sender's network, the receiver address is the address of the device that connects the network to the next one. Flow control: if the rate at which the data are absorbed by the receiver is less than the rate at which data are produced by the sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver. Error control: the data link layer adds reliability to the physical layer by adding mechanisms to detect and retransmit damaged or lost frames. It also uses a mechanism to recognize duplicate frames. Error control is normally achieved through a trailer added to the end of the frame. Access control: when two or more devices are connected to the same link, data link layer protocols are necessary to determine which device has control over the link at any given time. The data link layer contains two sublayers: LLC (Logical Link Control) and MAC (Medium Access Control). LLC is the upper sublayer; it establishes and maintains the communication links to the device and is also responsible for frame error control and addressing. MAC is the lower sublayer; it controls how devices share the media channel.

Physical layer

The physical layer coordinates the functions required to carry a bit stream over a physical medium. It deals with the mechanical and electrical specifications of the interface and transmission medium.
It also defines the procedures and functions that physical devices and interfaces have to perform for transmission to occur. The physical layer is also concerned with the following. Physical characteristics of interfaces and media: the physical layer defines the characteristics of the interface between the devices and the transmission medium, and it also defines the type of transmission medium. Representation of bits: the physical layer data consist of a stream of bits with no interpretation. To be transmitted, bits must be encoded into signals, electrical or optical; the physical layer defines the type of encoding. Data rate: the transmission rate (the number of bits sent each second) is also defined by the physical layer. In other words, the physical layer defines the duration of a bit, that is, how long it lasts. Synchronization of bits: the sender and receiver must not only use the same bit rate but also be synchronized at the bit level. Line configuration: the physical layer is concerned with the connection of devices to the media. In a point-to-point configuration, two devices are connected through a dedicated link; in a multipoint configuration, a link is shared among several devices. Physical topology: the physical topology defines how devices are connected to make a network. Devices can be connected using a mesh, star, ring, bus, or hybrid topology. Transmission mode: the physical layer also defines the direction of transmission between two devices: simplex, half duplex, or full duplex.
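As a summary of the layering just described, the sketch below shows encapsulation: each layer wraps the data handed down from the layer above, and the receiving stack unwraps it in reverse order. The bracketed "headers" are made-up strings for illustration, not real protocol formats:

```python
# Illustrative OSI-style encapsulation: each layer adds its own
# header on the way down and removes it on the way back up.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def send(data: str) -> str:
    for layer in LAYERS:                 # walk down the sending stack
        data = f"[{layer}]{data}"
    return data                          # handed to the physical layer as bits

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):       # walk up the receiving stack
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

wire = send("hello")
# wire == "[data-link][network][transport][session][presentation][application]hello"
assert receive(wire) == "hello"
```

The outermost wrapper on the wire is the data link header, mirroring the fact that the lowest layers frame the data last on the way out and strip it first on the way in.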

Sunday, July 21, 2019

Limiting Reactant Effect on Lab

The Limiting Reactant Lab ABSTRACT The purpose of this lab was to see how the limiting reactant affects the whole reaction, to determine which reactant was limiting, and to determine how much of each reactant was present. INTRODUCTION A limiting reactant limits the reaction and controls the amount of product formed when balancing an equation and making calculations. When calculating a limiting reactant, two reactant masses are given, because once the limiting reactant is gone the reaction stops producing product. The limiting reactant is the reactant that is completely used up; once there is no more of it, the reaction stops (Shah, 2007; Buthelezi, Dingrando, Hainen, Winstrom, and Zike, 2013). The hypothesis proposed was that if there were a sufficient amount of iron, all of the copper would be precipitated out of solution. MATERIALS Ring stand, filter paper, distilled water, stirring rod, pipestem triangle, balance, copper(II) sulfate, 100 mL beaker, wire screen, weigh cups, iron filings, 250 mL beakers, Bunsen burner. METHODS The mass of a 100 mL beaker was measured using a balance, and the masses of the weigh cups were taken the same way. Eight grams of copper(II) sulfate crystals were measured using a balance and placed into the 100 mL beaker. A graduated cylinder was used to measure out 50 mL of water to add to the crystals. A Bunsen burner was lit under the ring stand, with the wire screen on the ring clamp holding the 100 mL beaker and its contents in place. The beaker was heated and stirred until just before the solution began to boil; the gas was then shut off, extinguishing the flame, and the solution began to cool. Then 1.3 grams of iron filings were stirred into the hot copper(II) sulfate solution. The 100 mL beaker was left to cool for ten minutes while the reaction taking place was observed. A sheet of filter paper was initialed and weighed. A filtration system was made and placed into a funnel, and the funnel was placed over an Erlenmeyer flask.
The liquid was poured slowly into the funnel, through the filter paper, and into the flask. The beaker was rinsed with tap water. When the solid settled, the beaker was rinsed two more times, until all of the solid was transferred to the filter paper. The filter paper was placed onto a watch glass and then into the oven. Once it had cooled, the masses of the beaker, filter paper, and solid were recorded. RESULTS The limiting reactant limits the reaction: once the limiting reactant is gone, the reaction stops, so it determines the amount of product produced. A single replacement reaction is a chemical reaction in which atoms of one element replace the atoms of another element in a compound. Several errors are possible in this procedure: if the equation is not balanced and the numbers are not recorded, the amounts used cannot be tracked, and it would be possible to use too much or too little of what is needed for the chemical reaction. Using dirty glassware can also affect the masses and the reaction occurring, since contaminants falsely increase masses or consume reactants in side reactions. Table 1. Mass of empty 100 mL beaker: 70 g. Mass of copper(II) sulfate: 8.0 g. Mass of iron filings: 1.3 g. Substance collected: 3.0 g. Mass of filter paper: 2.0 g. The moles of copper(II) sulfate came out to 0.05 mol, because 8.0 g of copper(II) sulfate divided by its molar mass of 160 g/mol equals 0.05 mol. The amount of iron added to the solution was about 0.023 mol Fe, because 1.3 g divided by 56 g/mol equals about 0.023 mol. About 0.047 mol of product was collected, because 3.0 g divided by 64 g/mol (copper) equals about 0.047 mol. Since iron and copper(II) sulfate react in a 1:1 mole ratio, the 0.023 mol of iron could produce at most 0.023 mol of copper, while the 0.05 mol of copper(II) sulfate could produce 0.05 mol. The limiting reactant is the iron metal.
The limiting reactant was the iron metal, since the calculations indicated that it could produce the smallest amount of copper. The CuSO4 was the excess reactant, with unreacted copper(II) sulfate left over. Several errors may have occurred in the lab: the beaker was not quite as clean as needed, the mass of the iron filings may have been recorded incorrectly, some of the iron remained built up along the sides of the beaker, and some of the iron was lost, by sticking to the weigh cup or spilling, before it reached the beaker with the copper(II) sulfate.
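As a check on the report's stoichiometry, the limiting-reactant arithmetic can be reproduced with a short calculation. Molar masses are rounded roughly as in the report, and the 1:1 single-replacement reaction Fe + CuSO4 -> FeSO4 + Cu is assumed:

```python
# Limiting-reactant check for Fe + CuSO4 -> FeSO4 + Cu (1:1 mole ratio).
# Molar masses rounded: CuSO4 ~160 g/mol, Fe ~56 g/mol, Cu ~64 g/mol.
M_CUSO4, M_FE, M_CU = 160.0, 56.0, 64.0

mol_cuso4 = 8.0 / M_CUSO4          # ~0.050 mol copper(II) sulfate
mol_fe = 1.3 / M_FE                # ~0.023 mol iron filings

# With a 1:1 ratio, the reactant supplying fewer moles is limiting.
limiting = "Fe" if mol_fe < mol_cuso4 else "CuSO4"
theoretical_cu_g = min(mol_fe, mol_cuso4) * M_CU

print(limiting)                     # Fe
print(round(theoretical_cu_g, 2))   # 1.49 (grams of copper expected)
```

The theoretical yield of about 1.5 g of copper is well below the 3.0 g of solid collected, which supports the report's suspicion that the collected solid was wet or contaminated.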

Saturday, July 20, 2019

Computer Crime

Computer Crime One of the newest areas of crime is what we call computer crime. The seeming anonymity of computer technologies may actually encourage some people who would not otherwise be tempted to commit crimes to do so using the Internet. They may simply believe that they will never be caught, or they may not think about being caught at all. They may simply find the lure of committing virtual crimes too psychologically appealing to resist. Many of those who commit crimes on the Internet are in fact psychologically disturbed and need compassionate treatment by psychiatric professionals. However, this does not lessen the real harm that they can do to people, and they must be stopped. Combating the global computer crime pandemic is becoming an increasingly urgent issue, as identity theft and spyware are occurring with alarming frequency. Early instances of computer crime found individuals, corporations and law enforcement unprepared, uninformed and immobilized to address computer crime responsively. This resulted in victims suffering long drawn-out battles to regain their identities. With no guidelines to assist them, many victims endured frustrating battles that yielded little benefit. Corporations likewise faced many obstacles in their uncharted course to recover from data theft. Technology has created a gateway for computer criminals, allowing easy access to personal or business computers via the internet. Cyber criminals use several different methods to infiltrate business and personal computers: fraudulent marketing schemes, online auctions, work-at-home schemes, gambling operations, and spam, just to name a few. Many times homeowners and businesses have no idea they have been the victim of a cyber crime.
Types of computer crime can sometimes lose their significance when we as citizens wrap them all up into one expression, "computer crime." There needs to be a further breakdown and a better public understanding of what computer crime actually is, and these types of computer crime will hopefully shed some light on the current problems faced today. Cyber terrorism is the convergence of terrorism and cyberspace. It is generally understood to mean unlawful attacks and threats of attack against computers, networks, and the information stored on these mediums, carried out to intimidate or coerce a government or its people in furtherance of political or social objectives. To qualify as cyber terrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear; examples include attacks that lead to death or bodily injury, explosions, plane crashes, and severe economic loss.

Friday, July 19, 2019

Use of Symbols in Yeats's Work, A Vision Essay

Use of Symbols in Yeats's Work, A Vision In his 1901 essay "Magic", Yeats writes, "I cannot now think symbols less than the greatest of all powers whether they are used consciously by the masters of magic, or half unconsciously by their successors, the poet, the musician and the artist" (p. 28). Later, in his introduction to A Vision, he explains, "I put the Tower and the Winding Stair together into evidence to show that my poetry has gained in self possession and power. I owe this change to an incredible experience" (Vision p. 8). The experience he goes on to relate is the preliminary stage of the composition of the work itself. In A Vision, however, Yeats exhibits his poetic power as well, along with his knowledge of mysticism and affinity for symbology, to illustrate the behavior of the forces of human consciousness and history. He ties these two cycles together into the overarching symbol of the work: the Great Wheel. This is a symbol that Yeats uses not only to explain the cycles of one individual's life, but also, through the same motions, to explain the cyclical movement of the centuries and the conjunction of certain historical events. When asked about the factual reality of his cosmological descriptions, he replies that they are "purely symbolical ... [and] have helped me to hold in a single thought reality and justice" (Vision p. 25). Though to a large extent obscure and complicated, these symbols are paramount to an understanding not only of the ideas contained in A Vision, but also of the thought process Yeats conveys in much of his poetry. The Great Wheel consists of and contains two opposing gyres, the primary and the antithetical, objectivity and subjectivity, which turn in opposite directions, the two... ...mary vein, men worshipping idols of faraway deities, or return to its antithetical predecessor, in which man's idols are seen as actual living beings captured in myth.
Eventually, he resigns himself to not knowing for certain what the future of mankind will be. He concludes "The particulars are the work of the thirteenth sphere, which is in every man and called by every man his freedom. Doubtless, for it can do all things and know all things, it knows what it will do with its own freedom, but it has kept the secret" (Vision p. 302). Works Cited: Adams, Hazard. The Book of Yeats's Vision. Ann Arbor: University of Michigan Press, 1995. Yeats, W.B. A Vision. New York: Macmillan, 1956. Yeats, W.B. The Poems. ed. Richard J. Finneran. New York: Macmillan, 1990. Yeats, W.B. "Magic". Essays and Introductions. New York: Macmillan, 1961. pp. 28-52.

The Portrayal of War in Charge of the Light Brigade and Dulce et Decorum Est

The Portrayal of War in Charge of the Light Brigade and Dulce et Decorum Est Both "Charge of the Light Brigade" by Alfred Lord Tennyson and "Dulce et Decorum est" by Wilfred Owen are poems about war. However, they were written in two very different contexts and about two very different wars. Charge of the Light Brigade describes a doomed cavalry charge made by British soldiers during the Battle of Balaclava in the Crimean War (1853-1856). Dulce et Decorum est, on the other hand, tells the story of a group of soldiers who were caught in a gas attack returning from the trenches of World War I (1914-1918) towards their "distant rest". Alfred Lord Tennyson was the Poet Laureate at the time and wrote the poem after reading about the Battle of Balaclava in the "Times" newspaper. This could have influenced how he portrayed the battle, as he used secondary information which could be unreliable. In contrast, Wilfred Owen had first-hand battle experience from World War I, and so you would expect his information to be more reliable; however, he could have emphasised certain points for poetic effect. In Dulce et Decorum est Wilfred Owen uses a variety of similes, metaphors and other poetic techniques to describe the actions, appearance and mental states of the soldiers. Owen describes the soldiers as "Bent double, like old beggars under sacks, knock-kneed, coughing like hags"; this simile shows that the war has had a very large effect on the soldiers, as it gives the impression that they have aged prematurely and are in a very bad state of health. Owen also says that they limped on "blood-shod, all went lame, all blind", also giving the impression that the soldiers are in a very bad state of health; this emphasi... ...rge of the Light Brigade Tennyson gives the impression that the British soldiers were fighting a losing battle.
Tennyson gives the impression to the reader that he is not against the war even if the soldiers were going to lose, and he still praises the soldiers for what they did. In Charge of the Light Brigade, Tennyson contradicts Owen's views, and instead says that it is honourable to die for one's country even if you lose the battle. In general I prefer Dulce et Decorum est because of what the poem stands for: Dulce et Decorum est gives the view that war is a terrible thing, and Owen is very much against it, whereas Tennyson seems to be a lot more pro-war. Works Cited Bensel-Meyers, L. Literary Culture: Reading and Writing Literary Arguments. New York: Pearson Custom Publishing, 2000. Napierkowski, Marie Rose, and Mary K. Ruby. "Poetry for Students." Vol. 1. Detroit: 1998.

Thursday, July 18, 2019

Capital Structure Essay

Capital structure is how a company finances its overall operations and growth by using funds from equity or debt (Investopedia, 2012). Of course, every company must determine its preference for its debt-to-equity ratio and determine which capital structure works best for it. Some approaches to analyzing capital structure are: 1. EBIT - EPS: This analyzes the impact of debt on earnings per share (EPS). Maximizing shareholders' wealth is the ultimate goal, and this approach therefore looks for the highest EPS across an expected range of earnings before interest and taxes (EBIT). 2. Valuation: Determines the impact of debt use on shareholder value by determining the level of debt at which the benefits of increased debt no longer outweigh the increased risks and expenses associated with financing (Wenk, 2012). 3. Cash Flow: Analyzes a firm's debt capacity using the weighted average cost of capital (WACC). The WACC is a calculation of a firm's cost of capital in which each capital source (bonds, stock and other long-term financing) is proportionally weighted to determine how much the company has to pay for every dollar it finances (Investopedia, 2012). Part of Competition Bikes' (CB) main consideration in the decision to merge with or acquire Canadian Biking is working capital. Let's use the EBIT - EPS approach to determine how to maximize shareholder return while minimizing the cost of capital. We currently know Canadian Biking's moderate sales forecast of EBIT figures for the next 5 years (Years 9 through 13); therefore we can apply the EBIT - EPS approach to choose an optimal capital structure. The total of capital sources in each of the 5 years is $600,000. We will use EBIT - EPS to determine which assortment of bonds*, preferred stock, and common stock is the best option to increase Canadian Biking's EPS.
The five alternative capital structures include: Option 1: 100% Bonds (fully financed) Option 2: 50% Preferred Stock & 50% Common Stock (no bonds) Option 3: 20% Bonds & 80% Common Stock Option 4: 40% Bonds & 60% Common Stock Option 5: 60% Bonds & 40% Common Stock *Annual bond interest rate is 9% After applying the EBIT - EPS approach to the forecasted EBIT amounts for Years 9 through 13, we can average the EPS over the 5 years to determine which capital structure produced the highest EPS. The EPS averages computed for the capital structure options are: Option 1: Average EPS = .0452 Option 2: Average EPS = .0542 Option 3: Average EPS = .0526 Option 4: Average EPS = .051 Option 5: Average EPS = .0494 Based on the EBIT - EPS approach, the recommended capital structure is Option 2, "50% preferred stock & 50% common stock". This is the best capital structure mainly because there are two things to consider: 1) long-term debt and the associated interest expense, and 2) equity and the number of common shares. Option 2 is the best capital structure because there are no bonds and therefore no interest expense. For example, if we look at Option 1 in Year 9, with bond interest at 9%, the bond interest expense is $54,000 (0.09 * $600,000). This lowers the income before taxes by $54,000. Although companies can finance with debt and use the interest expense deduction to lower their taxable income, it does not make sense for Canadian Biking to fully finance its capital, because the interest expense costs outweigh the benefit of the tax deduction, resulting in a significant decrease in total income available for common stock. Additionally, because the capital structure consists of 300,000 shares of preferred stock, the company must pay dividends of 5%, reducing the company's total income available for common stock by $15,000 (0.05 * 300,000). Although this reduces the total income available for common stock, the company will maximize its EPS by having only 50% of capital in common stock.
This reduces the total number of common shares outstanding, which means fewer shares to divide the total income among. Therefore, Option 2 is the most optimal capital structure, as it minimizes long-term debt expenses and sets the optimal number of common shares in order to maximize shareholder return. CAPITAL BUDGETING: Competition Bikes is considering building a manufacturing facility in a new Canadian location. The total investment for this project would be $600,000 USD. This consists of $400,000 to build the facility and an additional $200,000 in working capital to support operational costs. The company has projected cash flows over the next five years; therefore we can use capital budgeting methods such as net present value (NPV) and internal rate of return (IRR) that consider the time value of money for long-term investments (Pearson Education, Inc., 2008). Net present value analyzes the profitability of a project by finding the present value of the project's cash inflows and outflows and then subtracting the initial investment (Investopedia, 2012). The decision rule applied to NPV is fairly simple: if the NPV is positive, invest; if it is negative, do not invest. Competition Bikes applies NPV to forecasted low and moderate sales for the next 5 years. Using the forecasted sales for low demand, the total present value (after subtracting cash outflows from inflows) is $560,719. If we subtract the initial investment of $600,000 from this amount, the NPV is -$39,281. This is a significant warning that the company should not proceed with building the manufacturing facility. On the other hand, if we use the forecasted sales for moderate demand, the total present value is $608,447. If we subtract the initial investment of $600,000, the NPV is $8,447. Such a positive NPV indicates the company should proceed with building the manufacturing facility.
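The NPV arithmetic above can be sketched in a few lines. The figures are the present-value totals stated in the case; the per-year cash flows behind them are not reproduced here:

```python
# NPV decision rule using the present-value totals from the case data.
INITIAL_INVESTMENT = 600_000

pv_inflows = {"low demand": 560_719, "moderate demand": 608_447}

for scenario, pv in pv_inflows.items():
    npv = pv - INITIAL_INVESTMENT
    decision = "invest" if npv > 0 else "do not invest"
    print(f"{scenario}: NPV = {npv:+,} -> {decision}")
# low demand: NPV = -39,281 -> do not invest
# moderate demand: NPV = +8,447 -> invest
```

The split decision across scenarios is exactly the concern discussed next: the investment only clears the hurdle if at least moderate demand materializes.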
The biggest concern is determining which NPV to lean towards based on low or moderate sales. Unfortunately, the risk of having low sales outweighs the profitability benefit of having moderate sales. It is too risky for CB to move forward with the investment based on the NPV of low sales (-$39,281). In order for the company to profit from this investment, CB would need to have a moderate sales demand at minimum. The present value in NPV is calculated using an interest rate, also known as the required rate of return. CB's required rate of return is 10%. When this interest rate is adjusted so that the total present value equals the initial investment, the NPV becomes equal to zero; this rate is called the internal rate of return (IRR) (Pearson Education, Inc., 2008). The IRR is what a company can expect to earn from investing in the project, and the higher the IRR, the more desirable the investment. The calculated IRR for the low demand cash flows is 8.2% and the IRR for the moderate demand cash flows is 10.4%. Based on these IRR figures, the company should not pursue the capital investment, because the average IRR between low and moderate sales is 9.3%. This is below the company's required return on capital (hurdle rate) of 10% for pursuing a capital investment. Again, the company would need to have a moderate sales demand, at minimum, for this capital investment to be profitable, and it should therefore not pursue building a new manufacturing facility. WORKING CAPITAL: CB must effectively obtain and manage working capital for the expansion of the operation. CB must first look at its operating cycle, cash conversion cycle and free cash flow factors in order to improve production and management of working capital. Let's discuss the company's current status on each of the working capital and cash flow factors and determine how the company can improve in these areas.
First, the operating cycle involves CB sending the distributor a monthly invoice for all raw materials ordered, with terms of net/30 days. This can be improved by renegotiating the payment terms with distributors to net/15 days. This would increase cash flows by improving payment turnaround time and accounts receivable collections. Additionally, the company can improve its relations with its distributors to increase the effectiveness of its collection process. Another operating cycle factor is ordering and paying for inventory. Currently, the company pays for inventory in the month following production, and all inventory ordered for the month is used, leaving inventory levels at the end of each month consistent. In order to improve working capital, the company should utilize and lower its year-ending inventory balance. For example, at the end of Year 8, the company had $91,573 worth of inventory left over. The company should utilize the current inventory on hand before ordering similar raw material items. This will decrease cash outflows and leave less inventory on hand at the end of the year. Currently the average time in inventory is 25 days. This is a substantial turnaround time; in the future, the company can consider replacing labor with fixed assets to improve production time. This will satisfy customer demand by decreasing delivery time and improve cash flows by invoicing customers more frequently than 25 days after production. CB's cash conversion cycle factors also impact working capital. Currently, CB's suppliers invoice at the end of the month for that month's orders, with terms of net/15. CB does an excellent job of preserving its cash flows by paying the invoices on the 15th of the month following the order. CB can improve its working capital by negotiating for longer payment terms, i.e. net/30 days, allowing more time for the company to earn money to pay its invoices.
If this is not possible, the company can improve its forecasting measurements for ordering supplies and order the majority of the supplies needed for the month at the beginning of the month. This would increase the amount of time the company has sufficient supplies on hand without having to pay more money (because the suppliers will still invoice for the orders at the end of the month, regardless of how early in the month the supplies were ordered). This can increase working capital because it acts as a contingency plan, reducing the likelihood of running out of supplies, incurring delays, or ordering supplies in excess. Free cash flow factors also affect CB's working capital. Currently, the company recognizes depreciation both in manufacturing overhead and as depreciation expense, depending on the fixed asset. The company can use its depreciation data to improve management of cash flows by predicting when the company will have to spend a significant amount of money to replace an asset as its useful life expires. This will prepare CB for those unwanted, although necessary, fixed asset costs. Currently the corporation's marginal tax rate is 25%. The company can consider obtaining working capital by financing debt. This will leave the company with an interest expense at the end of the year, which is deductible from gross earnings and results in paying lower taxes. After CB improves its working capital, let's discuss how CB can use that working capital for the lease vs. buy option for a factory building in Canada. CB can use its working capital to cover the $50,000 down payment (or buy-out option if it decides to lease) and $200,000 for operational costs of the new factory. According to the data provided for the lease vs. buy option, the lease option will preserve cash outflows of $12,339 (purchase cash outflows are $333,999 and lease cash outflows are $321,660). Therefore, the company should lease the manufacturing facility to preserve cash outflows.
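The lease-versus-buy recommendation rests on a simple difference in total cash outflows, which can be verified directly from the figures in the case:

```python
# Lease vs. buy: compare total cash outflows from the case data.
buy_outflows = 333_999       # total cash outflows if CB purchases
lease_outflows = 321_660     # total cash outflows if CB leases

savings = buy_outflows - lease_outflows
better = "lease" if lease_outflows < buy_outflows else "buy"

print(savings)               # 12339 in cash outflows preserved by leasing
print(better)                # lease
```

This confirms the $12,339 figure quoted in the text; a fuller analysis would also discount each option's yearly outflows at the required rate of return.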
Leasing the facility will also allow CB to deduct the annual interest payments (6% interest) from gross earnings to lower its tax payments. This will increase the company's net earnings at the end of the year, also resulting in higher retained earnings and increased shareholder value. MERGER OR ACQUISITION: CB should consider many factors when deciding whether to merge with or acquire Canadian Biking. Let's analyze the pros and cons of a merger vs. an acquisition and determine what the best move would be for CB. First off, if the company were to merge with Canadian Biking, the potential EPS would increase by approximately .021. This shows potential for increased ownership earnings, but is it significant enough? By the same token, the price/earnings ratio for Canadian Biking at the end of Year 8 was 9, while CB's was 70. This shows that CB's current investors are expecting greater earnings in Year 9 and are willing to pay $70 for $1 of current earnings. This is not the case with Canadian Biking's investors: a low P/E ratio of 9 indicates that investors are not expecting significant growth in company earnings. This raises a concern about whether the merger would really deliver the potential increase of .021 in EPS. On the other hand, a merger would result in lower costs because CB would not be purchasing Canadian Biking outright. Canadian Biking also has a lower-cost competition bike that can decrease production costs and complement CB's current bike model. This will result in greater net earnings and cash flows. If the company were to acquire Canadian Biking, CB could expect a gradual increase in cash inflows over the next 5 years. However, the current offered sales price for Canadian Biking is $286,000; this is 30% more than what the company was valued at at the end of Year 8. Although CB has enough working capital to make the purchase, it would take 5 years of gradually increasing cash inflows to recoup the price tag of $286,000.
This means it could take approximately 5 years before shareholders saw a significant increase in earnings per share. Based on the pros and cons, CB should merge with Canadian Biking to lower its production and delivery costs; increase net income, EPS and cash flows; and preserve working capital. The price to acquire Canadian Biking is simply unreasonable based on the predicted cash inflows over the next 5 years. The merger will enhance CB's market position in Canada by providing a local distributor to handle all customer orders and deliver cost-effective, excellent customer service to the growing Canadian market. References Investopedia. (2012). Capital Structure. Retrieved from http://www.kotzinvaluation.com/articles/capital-structure.htm Investopedia. (2012). Weighted Average Cost of Capital. Retrieved from http://www.investopedia.com/terms/w/wacc.asp#axzz2Azkq4E2V Investopedia. (2012). Net Present Value. Retrieved from http://www.investopedia.com/terms/n/npv.asp#axzz2Azkq4E2 Pearson Education, Inc. (2008). Horngren Accounting. Retrieved from http://wpscms.pearsoncmg.com/wps/media/objects/6716/6877765/hha08_flash_main.html?chapter=null&page=1042&anchory=null&pstart=null&pend=null Wenk, D. (2012). Using an optimal capital structure in business valuation. Retrieved from http://www.kotzinvaluation.com/articles/capital-structure.htm