Hazards of Chemical Process Scale-up
Dr Daren Tee, Consultancy Group, Hazard Evaluation Laboratory
Dr Simon Waldram, Consultancy Group, Hazard Evaluation Laboratory

Chemical process scale-up of reacting systems necessitates an understanding of the importance of, and inter-relationships between, chemical kinetics, thermodynamics, heat and mass transfer rates, fluid mixing and economics. Some common pitfalls that lead to scale-up problems are discussed in this paper.

This paper is concerned with the type of industrial chemistry that is usually carried out in the liquid phase in batch or semi-batch reactors.

1. Introduction
Most novel chemical synthesis routes are developed on the small scale in the laboratory by synthetic organic chemists. Because, over time, they work on many different processes, the equipment they use tends to be multi-purpose. Their reaction “workhorse” is the well-stirred batch, or semi-batch, reactor. This is understandable, and to some extent even logical, because for many systems it is relatively easy to make this type of reacting system conform closely to the “ideal” assumptions that enable the batch reactor to be modelled by a first order differential equation. Because of this, interpretation of experimental data and parameter extraction or estimation (eg, determination of orders of reaction and activation energies) is relatively straightforward. Alternative approaches might be to use a Continuous Stirred Tank Reactor (CSTR) or a Plug Flow Reactor (PFR). However, these require feed vessels and pumps as well as product tanks, and a much larger inventory of chemicals is needed. In addition, the implicit assumptions associated with the ideal PFR (perfect radial mixing to give uniformity of composition and temperature within a specific reactor cross section, but no axial mixing between fluid elements that enter the reactor at different times) are surprisingly difficult to achieve in the laboratory: therefore parameter extraction is either more inaccurate (if overly simple models are used to describe the reactor) or more complex (if realistic reactor models are developed). For these, and other, reasons the ideal batch reactor continues to be used widely as the preferred process development tool. Because of this, when scale-up occurs there is a tendency to “stick with the familiar”, and “off the shelf” batch reactors (eg, from manufacturers such as Pfaudler) are the norm in the fine chemical industry.
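As an illustrative sketch, the “ideal” batch reactor model and the associated parameter extraction can be expressed in a few lines of Python. A single first-order reaction is assumed, and the rate coefficient and initial concentration are hypothetical values chosen only for the example:

```python
import math

# Ideal isothermal batch reactor with a single first-order reaction A -> B:
# dC/dt = -k C  =>  C(t) = C0 * exp(-k t)
k_true, C0 = 0.05, 2.0          # 1/min, mol/L (illustrative values)
times = [0, 5, 10, 20, 40, 60]  # sampling times, min
conc = [C0 * math.exp(-k_true * t) for t in times]  # "measured" data

# Parameter extraction: a first-order model linearises as ln C = ln C0 - k t,
# so k follows from the least-squares slope of ln C against t.
n = len(times)
xbar = sum(times) / n
ybar = sum(math.log(c) for c in conc) / n
slope = (sum(t * math.log(c) for t, c in zip(times, conc)) - n * xbar * ybar) \
        / (sum(t * t for t in times) - n * xbar ** 2)
k_est = -slope
print(f"estimated k = {k_est:.3f} per min")
```

This directness of interpretation is exactly why the well-stirred batch reactor is so convenient as a development tool.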

Often the reactor performance will determine whether a specific process is financially viable. After all, downstream separation duties will depend, sometimes entirely, on whether the desired product is mixed with un-reacted feeds or contaminated with undesired by-products. In the limit one can envisage the perfect reactor being a small-scale unit capable of receiving an imported feed (eg, from stock tanks or a road tanker, etc.) and being capable of delivering a quality product for immediate export without further processing. Rather than continuing to use large “off the shelf” reactors, this type of purpose designed, small reactor unit should be our ideal.

2. Predicting Reactor Performance
Probably the greatest hazard associated with process scale-up is a financial one. It occurs when the full-scale process fails to deliver the product at the anticipated rate and with the desired purity. For this reason, in the past, conservative approaches to scale-up have been followed, through several sizes of pilot plant, before full-scale plant capital authorisation is issued. With good modern equipment and appropriate mathematical modelling and parameter extraction this staged scale-up is no longer necessary for many relatively simple processes. There are two common approaches.

2.1 Making Sure that Process Conditions are Scale Invariant: If reacting molecules experience a particular composition-temperature-time (or in the case of continuous flow systems composition-temperature-position) history in a laboratory scale reactor, then reproducing that history on the larger scale should result in an identical product. If this can be achieved then product quality is guaranteed and scale-up has been possible without any detailed understanding of the kinetics or thermodynamics of the reaction(s) involved. This is the tactic that is still adopted, for instance, by many companies involved in the manufacture of pharmaceutical intermediates and products. However, physical processes such as mixing, heat transfer and mass transfer are likely to be involved, so changes in the stirrer design, mixing speed or blending of inlet feed streams may have a dramatic effect on product yield. In addition, other variables such as details of a gas delivery system, or the bubble and droplet sizes of dispersed gases and liquids, may be of crucial importance. This method of scale-up may be relatively quick but is often risky (financially and from a safety perspective) for all but the simplest processes.

2.2 Following a Rigorous Reactor Design Methodology: This procedure has long been recognised as preferable by the chemical engineering community, but is surprisingly unfamiliar to many chemists: reference 1 was one of the earlier textbooks in this area. The key elements of reactor design are illustrated in Figure 1. The methodology is to gather data for the system of interest and then to integrate that data into a mathematical model (appropriate mass and energy balances) that is developed to describe the full-scale reactor. Solution of the model enables predictions to be made for the yield, selectivity, temperature, etc. that can be expected to result from specific start or inlet conditions. Many commercially available software packages are available to carry out just this function, see for example references 2 and 3, and they contain preprogrammed models of many common types of reactor.

Figure 1 emphasises that rational and fundamental design of reactors is not possible without:
  1. A specific proposal for the reactor type to be used, block 2
  2. An understanding of the chemical kinetics of all the reactions involved, including undesired reactions, block 3
  3. A knowledge of the heat effects associated with each reaction, block 4
  4. An adequate description of the mixing within the reactor or, for a continuous flow system, the flow pattern through the reactor, block 5
  5. A description of the rates of heat and mass transfer that will be present in the system, block 6
In many instances this integrated approach can be simplified very considerably. Thus, for example, for reactions that are essentially thermally neutral an energy balance is not required and for single-phase systems with low heats of reaction, block 6 will be of no consequence in terms of the design equations. For batch reactions with kinetics that are slow compared with the time scale on which good mixing on the molecular scale can be made to occur, essentially no input from block 5 is required if there is only a single reaction. If the reaction kinetics are very fast then instantaneous reaction with a heat output related directly to a feed rate may be a reasonable assumption. In this case, of course, the reactor could in theory be infinitesimally small, but in practice it must still incorporate a heat transfer system capable of removing the steady state enthalpy release, so a finite small size will be required. With all of this information, detailed predictions of reactor performance can be made, the economic robustness of the project can be examined (for instance in relation to changing feedstock prices, energy costs, cooling costs or product price) and rational project management and reactor design decisions can be made. If step changes in the quality of reactor design are to be made then this is the only valid approach. For example for fast reactions, small-scale, high throughput, continuous flow reactors may be possible rather than huge stirred vessels into which a semi-batch feed is slowly trickled. Or, for particularly hazardous reactions, systems with a much reduced reacting inventory may be possible. [4]
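For the limiting case described above, in which a very fast reaction releases heat at a rate set directly by the feed rate, the steady state cooling duty follows from a one-line balance. The feed rate, concentration and reaction enthalpy below are invented purely for illustration:

```python
# For a very fast (feed-rate-controlled) semi-batch reaction, the steady state
# heat release equals feed rate x feed concentration x reaction enthalpy,
# so the jacket duty can be sized directly from the dosing rate.
F = 0.002          # volumetric feed rate, m^3/s (hypothetical)
C_feed = 2000.0    # reactant concentration in the feed, mol/m^3
dH = -80000.0      # reaction enthalpy, J/mol (exothermic, so negative)
q = F * C_feed * (-dH)   # required heat removal rate, W
print(f"cooling duty = {q/1000:.0f} kW")
```

However small the reacting volume, the heat transfer system must still be sized for this steady state duty.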

Figure 2 emphasises that the approach described in section 2.1 is really the same as that described in section 2.2 but with all the key information missing. For all but the simplest systems, scale-up using this method is therefore based largely on ignorance and trial and error rather than firm principles.

3. Modern Approaches to Process Development
3.1. Hazard Identification and Risk Assessment: Within the European Community unified legislation is becoming the norm. Current UK legislation relating to industrial health and safety requires all companies (employing more than 5 people) to make risk assessments of their activities and to keep records of these. The first stage of the risk assessment is to identify the hazards that may be present: the second stage is to consider the risks that are associated with specific activities bearing in mind the nature of the hazards that have been revealed. A ‘Basis of Safety’ then needs to be defined for a particular activity so that the risks to which personnel and the environment are exposed can be controlled at levels deemed to be acceptable.

When developing a chemical process these procedures necessitate, inter alia, a detailed understanding of the properties of the materials being handled and a knowledge of the reactions that may occur between the chemicals present. The consequences of such reactions (both desired and undesired) must be appreciated and the time scale on which specific hazards may develop (eg, creation of overpressure within equipment) should be known. [5] Such safety considerations should be seamless activities that take place continuously throughout the chemical process lifecycle. At the earliest stages of process development, they may simply take the form of literature surveys, references to databases or websites and some simple hand calculations (eg, to estimate the enthalpy of reaction or calculate the oxygen balance of key molecules). Later on, as laboratory work commences, some small-scale safety related screening tests are likely to be required. As the scale of laboratory work continues to grow, more detailed studies of the process in a reaction calorimeter will give a direct measure of the enthalpy release associated with the desired reaction(s). Adiabatic calorimetry will enable the consequences of loss of cooling to be studied and the trajectory of any resulting runaway reaction to be followed. (But be aware that the magnitude of the temperature rise, and the time scale on which this occurs, may be very different on the large-scale: however, reliable ways of allowing for this, experimentally and theoretically, have been developed. [6]) Appropriate adiabatic calorimetry will also reveal whether a thermal runaway from normal reaction conditions is capable of triggering additional exotherms that might be associated, for instance, with decomposition reactions. In most Western European countries, this type of approach is now required by national regulatory authorities.
A profoundly important corollary of this requirement is that for most new processes much of the experimental data relating to blocks 3 and 4 in figure 1 will be available. Designing an appropriate reactor from fundamentals therefore becomes an increasingly realistic objective.

3.2. Experimental Design: Time to marketplace is often crucially important with a new product and can have a dramatic effect on cash flow. Process development must therefore be carried out in a near optimal fashion. Statistical experimental design techniques are now widely used: see reference 7 for a general introduction to the field. Software packages to facilitate both the planning of experiments and the interpretation of data are readily available. [8] With modern equipment the emerging possibilities of carrying out multiple experiments in parallel enables large programmes of two-level factorial design experiments to be completed quickly. Key factors (eg, temperature or temperature profile, composition, feed rate, mixing speed, etc.) that affect the product ‘objective function’ (eg, yield, purity, profit margin, etc.) can be rapidly identified and a relatively small number of further experiments enables detailed response surfaces of this objective function to be mapped as a function of these key variables.
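A minimal sketch of a two-level full factorial plan is given below. The three factor names are hypothetical, and a toy linear model stands in for the measured objective function (eg, yield) so that the calculation of a main effect can be shown:

```python
from itertools import product

# Two-level full factorial plan: each factor at a coded low (-1) and high (+1) level.
factors = {"temperature": (-1, +1), "feed_rate": (-1, +1), "stir_speed": (-1, +1)}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

def response(run):
    # Toy linear response standing in for a measured yield, %
    return 50 + 5 * run["temperature"] - 2 * run["feed_rate"]

def main_effect(name):
    # Mean response at the high level minus the mean at the low level.
    hi = [response(r) for r in runs if r[name] == +1]
    lo = [response(r) for r in runs if r[name] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(len(runs), "runs; temperature main effect =", main_effect("temperature"))
```

With parallel equipment all eight runs can be executed in one campaign, and the largest main effects identify the key variables for subsequent response surface mapping.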

4. Process Design
4.1. Anticipating Credible Worst Case Scenarios: The training of chemical engineers, at least in the UK, is brought to a focus in the final stages of their course by the required completion of a detailed design project. Many of the individual strands of the undergraduate course can be integrated and woven into this type of team-based activity. Arguably, we are quite effective at training chemical engineers to design plants for an intended activity. What often receives insufficient attention in undergraduate training is that the plant must also be designed to be adequately safe when unintended activities occur.

4.2. Designing for Both the Desired Reaction and for Worst Credible Maloperation: A formal approach, or several, for identifying hazards is required. HAZOP is one of the most successful and most commonly used. [9] In the context of chemical reactions, the first step is to realise that the reactions that take place are not always the desired ones. [10] Neither are they confined to taking place within reactors. Reactions in plant items such as evaporators, storage tanks, distillation columns or pipelines are not uncommon and may be induced by credible maloperations such as occurred at Bhopal and Castleford. [11,12] This means that the effects of credible maloperations such as contamination, recipe mistakes, temperature errors or mixing problems must be investigated and understood so as to reveal the transient conditions and variable ranges for which the process design must remain safe. Thus, for instance, a reactor pressure relief system cannot be specified until the maloperation that leads to its emergency use has been fully characterised. [13]

5. Some Guideline Rules
5.1. Heat Transfer Duties: Well established methods exist for estimating reaction enthalpies. [14] Good reaction calorimetry experiments allow this system characteristic to be measured directly. They also allow the rate of heat evolution (ie, power output) to be determined as a function of time. Enshrined in such power versus time curves are the kinetics of all the reactions taking place. Unravelling such information can be difficult, particularly for multiple reactions, and in this context an in-situ FTIR probe can generate very useful concentration/time data. This in turn permits improved modelling of the global reaction kinetics, and specification of plant scale cooling requirements.
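As a sketch of how a measured power versus time trace yields the total reaction enthalpy, the curve can be integrated numerically, here by the trapezoidal rule. The power values and moles fed are invented for illustration:

```python
# Integrating a reaction-calorimeter power trace (W) over time (s) gives the
# total enthalpy released by the batch; the trace below is purely illustrative.
times = [0, 600, 1200, 1800, 2400, 3000]    # s
power = [0.0, 40.0, 55.0, 30.0, 10.0, 0.0]  # W

# Trapezoidal rule over successive sampling intervals
Q = sum(0.5 * (power[i] + power[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1))
moles_fed = 2.0  # mol of limiting reagent charged (assumed)
print(f"total heat = {Q/1000:.0f} kJ ({Q/moles_fed/1000:.1f} kJ/mol)")
```

The shape of the same trace, not just its integral, carries the kinetic information referred to above.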

5.2. The Influence of Mixing on Reaction Rates: As noted in reference 1, the problem of mixing of fluids during reaction is important for multiple reactions, fast reactions in homogeneous systems and for all heterogeneous systems. The mixing that occurs on the molecular level is called micromixing, whereas the large-scale eddies, turbulence and swirl that deliver material from point to point within the reactor are the result of macromixing. For single-phase systems some of the most important situations in which mixing may influence reaction rates are:

  • Situations in which several competing reactions are present and dependent to a different extent on the concentrations of the individual species present.
  • Situations in which the time scale on which the reactions occur can be fast relative to the timescales on which mixing is complete.
Some guideline rules are given by Sharratt in reference 15 and can be summarised as follows:
  • Where the reaction rate is low there should be no influence of mixing on reaction rate. When scaling up ensure that the macromixing time is less than the reaction time.
  • With fast reactions, there may be a local concentration excess of one reactant due to macromixing being slower than reaction. On scale-up aim for a constant macromixing time. For geometrically similar systems, this means using the same impeller tip speed.
  • For very fast reactions, there may be a local concentration excess of one reactant due to micromixing limitations. On scale-up aim for a constant micromixing time. This means using the same mixing power per unit reactor volume. The dimensionless reaction number is given by tkC_Ao^(n-1), where t is a reactor characteristic time (mean residence time for a continuous flow system or holding time in a batch reactor), k is the Arrhenius rate coefficient, C_Ao is the initial (or inlet) concentration of the limiting reaction species and n is the reaction order. This useful group is a ratio of characteristic reactor time to characteristic reaction time: as a general rule, irrespective of the reactor type (ie, mixing pattern) a larger value of tkC_Ao^(n-1) will result in a higher feed conversion, see chapter 5 of reference 1. Conversely a lower value will mean smaller conversions. These and many other aspects of batch and semi-batch reactor design are discussed in references 1 and 16.
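The dimensionless reaction number can be illustrated numerically: for a first-order reaction (n = 1) in an ideal batch or plug flow reactor the conversion is X = 1 - exp(-tkC_Ao^(n-1)), so a larger value of the group does indeed give a higher conversion. All numerical values below are assumed for illustration:

```python
import math

def reaction_number(t, k, C_Ao, n):
    # Dimensionless reaction number t * k * C_Ao**(n-1):
    # ratio of characteristic reactor time to characteristic reaction time.
    return t * k * C_Ao ** (n - 1)

# Illustrative values: 1800 s holding time, k = 2e-3 1/s, first-order reaction
Da = reaction_number(t=1800.0, k=2.0e-3, C_Ao=500.0, n=1)
X = 1.0 - math.exp(-Da)   # ideal batch/PFR conversion for n = 1
print(f"reaction number = {Da:.1f}, conversion = {X:.2%}")
```

For other reaction orders and mixing patterns the algebra differs, but the qualitative trend with the group is the same.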

5.3. Multiphase Systems: If a reaction requires the presence of more than one phase to proceed at the rate it does, then it is heterogeneous. Heterogeneous systems require reacting species to be delivered between phases, or to or from interfaces (solid-liquid, solid-gas or liquid-liquid). The limiting step in the mechanism for such delivery is usually the rate of mass transfer or diffusion, so reaction rates in these circumstances will often be functions of the coefficients for these transport processes. These have to be calculated from correlations that are usually expressed in terms of dimensionless groups and that relate to the specific type of system being used (eg, slurry reactor, bubble column, trickle bed reactor, etc.). The scale-up of these types of reacting system is considerably more difficult than for homogeneous systems and still requires the use of one or more stages of pilot plant development. Systems in which the reaction rate is controlled by transport processes, rather than intrinsic chemical kinetics, will usually display a much lower apparent activation energy.
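The transfer-limited rate described above can be sketched with the standard film model for a gas-liquid system; the volumetric mass transfer coefficient and concentrations below are invented for illustration:

```python
# Film-model sketch: when interphase mass transfer limits a gas-liquid
# reaction, the observed volumetric rate is kLa * (C_star - C_bulk).
kLa = 0.05       # volumetric mass transfer coefficient k_L*a, 1/s (assumed)
C_star = 1.2     # interfacial saturation concentration, mol/m^3 (assumed)
C_bulk = 0.2     # bulk liquid concentration, mol/m^3 (assumed)
rate = kLa * (C_star - C_bulk)   # observed rate, mol/(m^3 s)
print(f"transfer-limited rate = {rate:.3f} mol/(m^3 s)")
```

In practice kLa itself must come from a correlation specific to the contactor type, which is why pilot plant stages remain necessary for these systems.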

5.4. The Residence Time Distribution: In continuous flow systems material passing through the system does so with a range of characteristic residence times: this is called the Residence Time Distribution, or RTD, see block 5 of Figure 1. This can be measured using inert tracer techniques. In the case of single, first order reactions, the form of the residence time distribution is sufficient to define uniquely the conversion that will be attained. For more complex reactions additional information is required and this can be difficult to extract, see chapter 5 of reference 16 for instance. Not only is the form of the residence time distribution (which characterises the macromixing) required but also the manner in which it occurs (which characterises the micromixing). This can only be achieved by using multiple reacting tracer systems.
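For a single first-order reaction the statement that the RTD alone fixes the conversion, X = 1 - (integral of E(t) exp(-k t) dt), can be checked directly with a discretised tracer curve. The rate coefficient and RTD values below are hypothetical:

```python
import math

k = 0.1    # first-order rate coefficient, 1/min (assumed)
dt = 1.0   # tracer sampling interval, min
E = [0.0, 0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05]  # discretised RTD, 1/min
assert abs(sum(E) * dt - 1.0) < 1e-9   # an RTD must integrate to one

# Segregated-flow result, exact for a single first-order reaction:
# X = 1 - integral of E(t) * exp(-k t) dt
X = 1.0 - sum(e * math.exp(-k * i * dt) * dt for i, e in enumerate(E))
print(f"predicted conversion = {X:.3f}")
```

For non-first-order or multiple reactions this calculation is no longer unique, which is where the micromixing information becomes essential.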

5.5. Crystallisation Processes and Ostwald Ripening: Where a solid phase is formed in a reaction process, there is usually a subsequent need for separation, often by filtration. Particle size distribution of the solids can vary with scale with the result that filtration times on the large scale are unacceptably long. If supersaturation during reaction can be developed in a controlled manner, and if both primary and secondary nucleation can also be controlled, then it may be possible to manipulate the size distribution of the solid phase. An alternative is to carry out some post reaction conditioning of the solid phase, and one way of achieving this is to repeatedly cycle the temperature of the reactor contents (solid and liquid); this is called Ostwald ripening. If correct conditions are chosen then in each temperature cycle some of the smaller solids can be dissolved during the heating stage and during subsequent cooling new solids can be persuaded to grow on nuclei of the existing solids. Several such cycles can produce dramatic improvements in processing times. In one instance three heating and cooling cycles between 45 °C and 50 °C reduced filtration time from 12 to 1.5 hours, washing time from 45 hours to less than 2 hours and drying time from 168 to 66 hours. [17]

5.6. KISS: The most important principle of all is to Keep It Simple, Stupid! (KISS). Always work with experiments, measurements, mathematical models and computer programmes that are suited to the task at hand. “Make everything as simple as possible, but no simpler” is an oft quoted maxim. Evaluation of a large number of unknown parameters in a model may enable a good fit to be made to experimental data, but that does not imply that the model is necessarily of the correct form, or that the parameter values are realistic. A better approach is to determine as many parameters as possible by independent means and then to evaluate the final few by, for instance, non-linear regression. In this way greater confidence can be gained that parameter values have real physical meaning and fall in realistic ranges. Without this approach a multi-parameter fitting exercise may reduce to no more than empirical curve fitting.
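The recommended approach, fixing independently measured parameters and regressing only the remainder, can be sketched as follows. The concentration data are synthetic, a first-order model is assumed, and with the initial concentration known from the charge only the rate coefficient is fitted:

```python
import math

C0 = 2.0  # initial concentration, mol/L, known from the charge and held fixed
data = [(0, 2.00), (10, 1.22), (20, 0.73), (30, 0.45)]  # (t/min, measured C)

def sse(k):
    # Sum of squared errors for an assumed first-order model C = C0*exp(-k t)
    return sum((c - C0 * math.exp(-k * t)) ** 2 for t, c in data)

# With only one free parameter left, even a coarse grid search is adequate.
k_fit = min((i * 1e-4 for i in range(1, 2000)), key=sse)
print(f"fitted k = {k_fit:.4f} per min")
```

Fitting C0 as well would improve the numerical fit slightly, but at the cost of a parameter that no longer has to agree with the known charge; keeping it fixed is exactly the discipline advocated above.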

6. Conclusions
Many processes that looked promising on the small scale in the laboratory have been consigned to the process development graveyard because they failed to meet performance expectations on the larger scale. Such failure is not random, or unlucky, but can be explained in terms of the interactions between mixing, kinetics and heat and mass transfer within the reactor. To understand these phenomena, and to predict performance in particular circumstances, requires detailed mathematical models. Such models, and the system parameters needed within them, can be generated using commonly available equipment and software.

  1. Levenspiel O, Chemical Reaction Engineering, Wiley, 1972
  2. ASPEN software, see http://www.aspentec.com
  3. BATCHCAD software, see http://www.chempute.com/batchcad.htm
  4. Kletz T A, Plant Design for Safety, a User Friendly Approach, Taylor and Francis, 1991, ISBN 1 56032 068 0
  5. Designing and Operating Safe Chemical Reaction Processes, HSE Books, 2000, HSG143, ISBN 0 7176 1051 9
  6. Fisher H G, Ed, Emergency Relief System Design Using DIERS Technology, AIChE, 1992, ISBN 0 8169 0568 1
  7. Davies L, Efficiency in Research, Development and Production: the Statistical Design and Analysis of Chemical Experiments, RSC, 1993, ISBN 0 85186 137 7
  8. Design Ease software, see http://www.statease.com
  9. Crawley F et al, HAZOP: Guide to Best Practice, IChemE, 2000, ISBN 0 85295 497 1
  10. Lees F P, Loss Prevention in the Process Industries, 2nd Edition, Butterworth Heinemann, 1996, ISBN 0 7506 1547 8. Appendix 3, Seveso
  11. Lees F P, Loss Prevention in the Process Industries, 2nd Edition, Butterworth Heinemann, 1996, ISBN 0 7506 1547 8. Appendix 3, Bhopal
  12. The Fire at Hickson and Welch Ltd, HSE Books, 1994, ISBN 0 7176 0702 X
  13. Etchells J, Wilday J, Workbook for Chemical Reactor Relief System Sizing, HSE Books, 1998, CRR 136/1998, ISBN 0 7176 1389 5
  14. CHETAH software, see http://www.normas.com/ASTM/BOOKS/DS51C.html
  15. Hoyle W, Ed, Pilot Plants and Scale-up of Chemical Processes, Batch Versus Continuous Processing, RSC, 1997, ISBN 0 85404 796 4. Section by Sharratt, P N, page 13
  16. Sharratt P N, Ed, Handbook of Batch Process Design, Blackie Academic and Professional, 1997, ISBN 0 7514 0369 5
  17. HEL application note number 98002.