C. Policy Analysis: Stabilizing the Firm

Generally speaking, stability for the firm is simply consistency. Most firms base major decisions on a handful of the key parameters in this model. To maintain stability, a firm wants the standard deviations of those parameters to stay low, and it wants actual values to track the desired or projected values closely. For example, how much warehouse space to rent may depend on the inventory levels predicted for the next year, and how many employees to keep on payroll may rely on the model's projected labor force; it is expensive for a company to carry too many employees, but it can be even more costly not to have enough on hand to fulfill production requirements. Dramatic fluctuations in key parameters are a firm's nightmare. The table below lists the effect that discrepancies in some of the more important parameters can have on a firm that is striving for stability.

Parameter: Inventory
Effect on Firm: If inventory strays significantly from desired inventory, the firm will be continually making production adjustments. As mentioned above, inventory levels have a dramatic impact on the firm. If inventory levels are higher than desired, the firm will have to pay for costly warehouse space. If inventory levels are much lower than desired, the firm will lose revenue in the form of lost orders. Additionally, the production rate and labor force are impacted by inventory levels.

Parameter: Production vs. Customer Orders
Effect on Firm: If the order rate fluctuates, the production rate will fluctuate as well. The issue is that, because of the delays in the system, the fluctuations in the production rate are magnified. Discrepancies in the customer order rate can have a significant impact on inventory levels, the production rate, and the labor force. Depending on how sensitive the adjustment times are, these changes can lead to product shortages or overages and end up being very costly to the firm.

Parameter: Periodicity of Fluctuations
Effect on Firm: How long one fluctuation cycle lasts, that is, how frequent the fluctuations are, will certainly affect the stability of a firm. Long-period fluctuations are merely trends; it is the fluctuations with a short period that should concern a firm. If key parameters show extremely high standard deviations, the consistency of the system suffers and the firm can be seriously harmed. The worst case is an oscillating parameter, where the average value may be exactly where the firm wants it but the actual values are all over the chart, which makes planning and prediction tasks such as budgeting nearly impossible.

[…]

Is the average length of a key parameter's fluctuation period more important than the parameter's volatility? In an ideal world both would stay low, because system performance suffers whenever key parameters swing sharply over time. When one fundamental variable in the system drops below its target, or climbs well above its pre-determined value, the timing of the system comes into sharp relief and the whole system comes under strain; if the underlying variables never settle, the system decays until it is of little value to the firm or its market players.

That is where the idea of dysfunction comes in. Suppose we want to estimate how many major oscillations occur every three or four months. A basic approach is the following:

Keep track of the main frequencies and time series of every key parameter.

Flag each major component that falls far below the leading ones, since it tends to show high volatility and a significant time lag.

Measure the time between successive fluctuations of each major component while the others are at rest.

All of the main components take part in this, and the length of each period determines the order in which the next cycle starts: the longer the period, the wider the time window that has to be considered. Once a given component falls below the median it is set aside and its value is held unchanged.

In this case there is no difference in the order between the second and third months for a fundamental component. To determine this, we will need


Parameter: Equilibrium
Effect on Firm: The longer it takes the firm to return to equilibrium, the more trouble that particular firm is in. Clearly, a short time to return to equilibrium allows a firm to maintain stability. If it takes a significant amount of time to return to equilibrium, the firm's key parameters are likely out of alignment and a change needs to be made somewhere. A sketch of how these stability measures can be computed from simulation output follows this table.
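The stability measures in the table above, the spread around the desired value, the length of one fluctuation cycle, and the time needed to return to equilibrium, can all be computed from a simulated time series. The Python sketch below is an illustration only, not output from our model; the decaying-oscillation series, the target of 100 units, and the 5 percent tolerance band are assumptions chosen for the example.

import numpy as np

def stability_metrics(series, target, tolerance=0.05, dt=1.0):
    # Simple stability measures for one model parameter over time.
    series = np.asarray(series, dtype=float)
    deviation = series - target

    # Volatility: root-mean-square deviation from the desired (target) value.
    volatility = float(np.sqrt(np.mean(deviation ** 2)))

    # Periodicity: average time between successive upward crossings of the
    # target, i.e. the typical length of one full fluctuation cycle.
    signs = np.sign(deviation)
    crossings = np.where((signs[:-1] < 0) & (signs[1:] >= 0))[0]
    period = float(np.mean(np.diff(crossings))) * dt if len(crossings) > 1 else float("inf")

    # Time to return to equilibrium: the first time after which the parameter
    # stays inside a +/- tolerance band around the target for the rest of the run.
    within = np.abs(deviation) <= tolerance * abs(target)
    settling_time = next((i * dt for i in range(len(series)) if within[i:].all()),
                         float("inf"))

    return {"volatility": volatility, "period": period, "settling_time": settling_time}

# Example: a decaying oscillation around a desired inventory of 100 units.
t = np.arange(0, 60, 1.0)
inventory = 100 + 20 * np.exp(-0.05 * t) * np.cos(2 * np.pi * t / 12)
print(stability_metrics(inventory, target=100))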

In order to stabilize the system, the Time to Adjust Inventory should be increased. We predict that if the firm is more relaxed about bringing its inventory into balance with the desired level, that is, if the Time to Adjust Inventory is higher, the firm will be more stable. A shorter Time to Adjust Inventory means that the firm must react more dramatically, and at a higher magnitude, to a large discrepancy in orders. That reaction is then followed by a further adjustment, perhaps even an over-correction, in inventory. By increasing the Time to Adjust Inventory, the firm gives itself more time to react to extreme inventory changes and therefore adjusts in smaller steps, lessening the severity of the impact when a large amount of inventory is moved into order fulfillment.
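To make the mechanism concrete, the sketch below is a stripped-down Python illustration of this loop, not our actual simulation model: customer orders step up by 20 percent, production completions lag production starts by a fixed delay, and the inventory gap is closed over the chosen Time to Adjust Inventory. All of the numbers in it (the 4-week production delay, the 400-unit desired inventory, the size of the order step) are assumptions made for the example.

def simulate_inventory(time_to_adjust_inventory, weeks=60, dt=1.0):
    inventory = 400.0            # units on hand, starting at the desired level
    desired_inventory = 400.0    # target inventory level
    order_rate = 100.0           # units shipped per week
    production_delay = 4.0       # weeks between starting and finishing a unit
    wip = order_rate * production_delay   # work in process, initially in balance
    history = []
    for step in range(int(weeks / dt)):
        if step * dt >= 10:
            order_rate = 120.0   # a one-time 20% step increase in customer orders
        # Core feedback: close the inventory gap over the adjustment time.
        adjustment = (desired_inventory - inventory) / time_to_adjust_inventory
        desired_production = max(0.0, order_rate + adjustment)
        production_start_rate = desired_production
        production_rate = wip / production_delay   # completions lag starts
        wip += (production_start_rate - production_rate) * dt
        inventory += (production_rate - order_rate) * dt
        history.append(inventory)
    return history

slow = simulate_inventory(time_to_adjust_inventory=8.0)   # gentle corrections
fast = simulate_inventory(time_to_adjust_inventory=2.0)   # aggressive corrections
for label, series in (("8-week adjustment time", slow), ("2-week adjustment time", fast)):
    print(label,
          "- max shortfall below desired:", round(400.0 - min(series), 1),
          "units, max overshoot above desired:", round(max(series) - 400.0, 1), "units")

With these made-up numbers, the 2-week adjustment time over-corrects: inventory swings well above the desired level and keeps oscillating around it after the order step, while the 8-week time produces one deeper but smooth dip with only a slight overshoot, which is the trade-off described above.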

After simulating the model with the changes in Time to Adjust Inventory, it is apparent that our prediction was correct and that shortening the Time to Adjust Inventory has a detrimental effect on the stability of our firm. As the graphs above show, increasing the Time to Adjust Inventory produced a curve shaped like the baseline, just slightly lower on the y-axis. Lowering the Time to Adjust Inventory caused significant oscillations in inventory levels, labor force requirements, and production start rates. This is due to the role that Time to Adjust Inventory plays in the feedback loops: it feeds Production Adjustment from Inventory, and Production Adjustment from Inventory has a significant impact on Desired Production, which in turn affects how many people our firm needs to employ and how much inventory we hold in stock. The feedback loop becomes tighter at lower values of Inventory Adjustment Time because that time is a denominator in our equation. If inventory has to be replenished sooner rather than later, the firm over-corrects, which leads to the oscillations we see in our simulation results.
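Our model's exact equations are not reproduced here, but in the conventional form of this kind of stock-management structure the relationship referred to above reads as follows (the Expected Order Rate term is an assumed name, not one taken from our model):

\[
\text{Production Adjustment from Inventory} = \frac{\text{Desired Inventory} - \text{Inventory}}{\text{Time to Adjust Inventory}}
\]
\[
\text{Desired Production} = \text{Expected Order Rate} + \text{Production Adjustment from Inventory}
\]

Because the adjustment time sits in the denominator, halving it doubles the correction applied to any given inventory gap; that is the tightening of the loop, and the over-correction, described above.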

Parameter/Delay: Time to Authorize Labor
Type: Behavioral
Prediction: This parameter will have a minimal impact; however, increasing this value will cause fluctuations in the production rate, the labor force, and even inventory.
Result: Increasing the Time to Authorize Labor destabilized our firm in the simulation; however, the impact was not quite as dramatic as anticipated. The labor force oscillated at extreme values, but the production rate and inventory values simply spiked and recovered. (A sketch of how this delay could enter the earlier illustration follows this table.)

Parameter/Delay: WIP Adjustment Time
Type: Behavioral
Prediction: Increasing the WIP adjustment time will have a small effect on decreasing the stability of the firm in terms of production rate and inventory.
Result: Decreasing the WIP Adjustment Time from 2
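As an illustration of how the Time to Authorize Labor delay in the table above could be added to the earlier inventory sketch (again an assumption, not our model's actual equations), production starts can be capped by the labor force on hand, with hiring closing the labor gap over the authorization time:

def labor_step(labor, desired_production, productivity=1.0,
               time_to_authorize_labor=4.0, dt=1.0):
    # Workers needed to meet desired production at the assumed productivity
    # (units per worker per week).
    desired_labor = desired_production / productivity
    # Hiring (or attrition) closes the labor gap over the authorization time.
    hiring_rate = (desired_labor - labor) / time_to_authorize_labor
    labor += hiring_rate * dt
    # Production starts are limited by the labor actually on hand.
    production_start_rate = labor * productivity
    return labor, production_start_rate

Replacing the production_start_rate = desired_production line in the earlier loop with a call to this function adds a second delay to the chain; the longer that delay, the later the labor force responds to an inventory shortfall, which is consistent with the oscillating labor force reported in the table.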
