Scientific Articles

in cooperation with:
Università Campus Bio-Medico Roma


Hydrogen Underground Storage: Status of Technology and Perspectives

Author: Carlo Cappellani – Senior Geoscientist

1  Hydrogen Underground Storage: Status of Technology and Perspectives

Hydrogen will play a key role in the development and transformation of future renewable energy systems. H2 has many benefits: it can be generated by well-established and emerging technologies and can be used in a variety of end-use energy and transport processes. As a fuel source, H2 has long been identified as a critical step toward a low-carbon, and eventually zero-carbon, energy society. Hydrogen storage is an essential element of an integrated energy system and of the hydrogen economy. As hydrogen demand and production grow, underground storage is emerging as a relevant, large-scale solution. While in recent years attention has mainly been on hydrogen supply and transmission infrastructure, underground hydrogen storage is needed to balance and ensure the resilience of a future energy system that relies significantly on renewable energy sources. Hydrogen can be stored underground using methods that have already proven their worth: Carbon Geo-Sequestration (CGS) and natural gas storage are essential analogs for H2 storage. Natural gas storage in underground geological formations dates back to 1916; according to many authors, the Ontario gas field (Canada) is considered the first successful underground storage project (Taylor et al., 1986). However, certain operational differences (physical and chemical properties) unique to H2 must be acknowledged for effective operation (Iglauer, 2017). Higher demand will require increased storage capacity, and the solution to this challenge is to utilize underground reservoirs. Underground reservoirs, such as salt caverns or porous rocks, offer giant capacities to store billions of cubic meters of hydrogen at high pressure.
Although a few Underground Hydrogen Storage (UHS) sites exist, little is known so far about how hydrogen behaves in the subsurface. Current studies are investigating not only its subsurface behavior but also what kind of environment, i.e. which type of subsurface, would be the right reservoir to store it at a given quantity and scale. Further challenges include containing hydrogen's tiny molecules inside the reservoirs, maintaining its purity, and operating the system within safe mechanical cyclic loading. Underground hydrogen storage requires an integrated multidisciplinary approach, combining several specialists and disciplines (e.g. fluid mechanics, rock mechanics, etc.). Integrating laboratory discoveries with numerical modelling will provide the solutions needed to make this technology ready for field deployment in the coming years.

To see more, go to the full-text article.

Natural Hydrogen: Promising opportunities for Exploration & Production

Author: Carlo Cappellani – Senior Geoscientist

1          Introduction

The global energy sector is transforming, and hydrogen (the most energy-rich gas) is likely to play an increasingly prominent role as a clean energy carrier. Many countries have identified hydrogen as a key pathway to decarbonise their transport, industrial processes, heating and energy storage sectors. Hydrogen is almost exclusively manufactured for industrial use, with around 840 Bm3 per year being produced worldwide (Wood Mackenzie 2021). It can be produced artificially via a variety of different pathways; the primary methods for producing hydrogen with low carbon emissions are:
  1. water electrolysis using renewable energy (green hydrogen)
  2. steam reformation of natural gas paired with carbon capture and storage (CCS; blue hydrogen)
  3. coal gasification combined with CCS (also blue hydrogen).
  • The majority of produced hydrogen originates from hydrocarbon-based feedstock without CCS (grey hydrogen), since the economics of electrolytic production of green hydrogen (0.1% of total H2 production) require improvement (Wood Mackenzie 2021).
  • For a large-scale hydrogen industry to develop, hydrogen storage is key, and hydrogen storage in salt caverns is considered the most promising approach for large-scale seasonal storage (HyUnder 2013; Caglayan et al. 2020).
Figure 1 Primary methods for hydrogen production
To see more, go to the full-text article.

Big Data in Oil and Gas Industry

Author: Elvirosa Brancaccio - Serintel Srl - Rome (Italy)

1          Introduction


Big Data, or Big Data analytics, refers to technologies for handling large datasets characterized by six main attributes: volume, variety, velocity, veracity, value, and complexity.

With the recent advent of data-recording sensors in exploration, drilling and production operations, the oil and gas industry has become a massively data-intensive industry.

Analyzing seismic and micro-seismic data, improving reservoir characterization and simulation, reducing drilling time, increasing drilling safety, optimizing the performance of production pumps, and improving petrochemical asset management, shipping and transportation, and occupational safety are among the applications of Big Data in the oil and gas industry.

In fact, there are ample opportunities for oil and gas companies to use Big Data to get more oil and gas out of hydrocarbon reservoirs, reduce capital and operational expenses, increase the speed and accuracy of investment decisions, and improve health and safety while mitigating environmental risks.

Figure 1 Big Data in Oil and Gas Exploration and Production

One of the key enablers of data-science-driven technologies for the industry is the ability to convert Big Data into “smart” data. New technologies such as deep learning, cognitive computing, and augmented and virtual reality provide a set of tools and techniques to integrate various types of data, quantify uncertainties, identify hidden patterns, and extract useful information, enormously reducing data processing time. This information is used to predict future trends, foresee behaviors, and answer questions that are often difficult or even impossible to answer through conventional models.

To see more, go to the full-text article.

Low Motion Floating Production Storage Offloading (LM-FPSO): Evolution of Offloading Production Systems

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction


The Oil & Gas industry has moved into deeper, more remote and technically demanding regions over the last 30 years. With the increasing technical complexity of extraction facilities, the fixed cost of the Oil & Gas upstream complex also increases; in the persistent lower-for-longer price environment there is continuing pressure to develop these fields safely while reducing CAPEX and OPEX.

FPSO technology is promising in offering a flexible solution to develop remote oil fields while maintaining competitive costs. Semisubmersible units, SPAR platforms and tension-leg platforms (TLPs) are also common in deepwater regions. TLPs, in particular, find application in water depths of up to 1,500 m, but an FPSO has the advantage of offering the required onboard storage capacity and offloading capability without employing a separate storage vessel or infrastructure.

The high dynamic motion generated by the rough sea conditions to which FPSO units are exposed when operating in remote sea areas makes the riser system design more challenging; indeed, it plays a fundamental role in determining the feasibility of extracting hydrocarbons from remote-region resources. The development of a low-motion FPSO therefore enables the utilization of conventional riser systems (such as steel catenary risers and top-tensioned risers). The use of conventional riser technologies also improves the life cycle and reliability of an FPSO facility: a simple and effective installation (by means of an additional facility structure) able to oppose the high dynamic forces that a rough sea environment exerts on the floating structure is a technological step change, needed to open up less accessible or economically cost-prohibitive fields.

To see more, go to the full-text article.

The Role of Natural Gas in the Energy Transition Phase

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1         Introduction


The rapid growth of the world population, driven by the development of the industrial sector, has led to an increase in anthropogenic greenhouse gas emissions. A concentration of carbon dioxide in the atmosphere unprecedented in at least the last 800,000 years has been detected (Figure 1‑1). This, together with other anthropogenic drivers, has been identified as the main cause of the “global warming” observed since the mid-20th century.

Figure 1‑1 Global anthropogenic CO2 emissions[1].

To face the issue raised by these considerations about CO2 concentration, the first worldwide agreement on greenhouse gas emissions was signed in April 2016. The 196 countries responsible for 55% of total CO2 emissions had agreed, at the Conference of the Parties in November 2015, to commit to cap global warming at a maximum of 1.5°C (referred to the global land-ocean mean surface temperature, GMST), a more challenging target than the 2°C cap originally proposed at the Paris World Climate Conference. Given this commitment, signatory countries need to review their energy strategies in order to reduce emissions by actively promoting low-carbon economy policies[2].

Natural gas is a fossil gas mixture consisting mainly of methane (C1). The remainder is heavier hydrocarbons: ethane (C2), propane (C3), isobutane (iC4), n-butane (nC4), and small amounts of heavier components up to C7+. The methane mole fraction in natural gas typically ranges from 87% to 97%[3].

Among all the fossil primary energy sources, natural gas presents the highest hydrogen-to-carbon ratio. This characteristic is of extreme importance since it leads to the following two main properties:

  • The highest lower heating value, expressed in MJ/kg, with respect to all other fossil fuels (as shown in Figure 1‑2)[4].
  • The lowest mass of CO2 produced per mass of fuel burned.
Figure 1‑2 Lower heating value [MJ/kg] for different types of hydrocarbons[5].
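The two properties above can be checked with simple combustion stoichiometry: CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O. The Python sketch below does this comparison; the LHV figures are typical literature values assumed here for illustration, not taken from this article.

```python
# Back-of-the-envelope check of the two properties above.
# Molar masses in g/mol; LHV values in MJ/kg are typical literature
# figures (assumptions, not from this article).
M_C, M_H, M_CO2 = 12.011, 1.008, 44.01

def co2_per_kg_fuel(x, y):
    """kg of CO2 emitted per kg of CxHy burned completely."""
    m_fuel = x * M_C + y * M_H
    return x * M_CO2 / m_fuel

fuels = {
    # name: (C atoms, H atoms, assumed LHV in MJ/kg)
    "methane CH4": (1, 4, 50.0),
    "octane C8H18 (gasoline proxy)": (8, 18, 44.4),
    "carbon C (coal proxy)": (1, 0, 32.8),
}

for name, (x, y, lhv) in fuels.items():
    ratio = co2_per_kg_fuel(x, y)
    print(f"{name}: {ratio:.2f} kg CO2/kg fuel, "
          f"{ratio / lhv * 1000:.0f} g CO2/MJ")
```

Under these assumptions, methane emits about 2.74 kg of CO2 per kg burned, but thanks to its high heating value its emissions per unit of energy are roughly half those of a coal-like fuel, which is exactly the point of the two bullets above.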

Owing to the properties described above, natural gas plays a fundamental role in the fight against climate change. The substitution of high-carbon-content fossil fuels, such as coal, with natural gas may represent the first step toward decreasing CO2 emissions.

The main sectors that would immediately benefit, in terms of CO2 emissions, from replacing low hydrogen-to-carbon fuels with methane are:

  • Energy production. All thermo-electric power plants belong to this sector. They may easily introduce methane as the burner fuel for the production of high-pressure steam. This strategy, already adopted by many companies, reduces CO2 emissions while saving operating costs on the post-combustion carbon capture unit.
  • Transportation. Road transportation already includes vehicles fueled by methane. In this case engines are designed to host such a fuel, and this constitutes a positive direction for the reduction of CO2 emissions.

Clearly, the substitution of “conventional” fuels with methane is just a temporary solution, a clever way to “buy time” during a transition phase, until the worldwide deployment of zero-emission (renewable) energy sources takes place.

[1] “Climate Change 2014 Synthesis Report Summary Chapter for Policymakers,” 2014.

To see more, go to the full-text article.

Innovation and New Technologies in the Upstream Oil & Gas Industry

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction


Oil & Gas reservoir research and exploration requires the utilization and adaptation of a large number of different technologies spread over numerous engineering fields. Because of the intensive resources involved in such operations, the Exploration and Production (E&P) sector is highly energy-demanding, and particular attention should be paid to making it smarter and more efficient.

In the search for technology updates, the upstream, as well as the downstream, Oil & Gas industry has always sought out external innovations, including in the fields of information technology and robotics.

Figure 1: Work-class ROVs: the innovative remote-controlled robots for subsea operation[1]

Figure 1 shows a work-class ROV (remotely operated vehicle) for subsea exploration during its assembly phase. ROVs comprise robotic arms, known as manipulators; a camera for visual analysis of the subsea environment; electrical drives for motion control; and batteries or external cables for communication and power delivery. ROVs for exploration were introduced during the '70s and represented a significant technology update in their field: because they can be designed to operate at very high pressures and low temperatures, unlike human operators, they enabled the discovery of a large number of new oil fields previously thought impossible to investigate, increasing the opportunities for Oil & Gas companies. The introduction of ROVs also decreased the cost of exploration operations and, on top of the economic aspect, increased safety by replacing human operators.

ROVs also represent an example of technology transfer from external sectors (in this case the military sector) to upstream Oil & Gas operations. Technologies that come into the Oil & Gas sector often enter a prolific chain of innovation and become refined and commercialized. That was also the case for ROVs: having been incorporated in the upstream sector for years, they found new applications in scientific research in marine biology and have been used over the years to search for famous shipwrecks and discover new marine species.

In the following paragraphs, some of the most important new technologies in the E&P sector will be presented and discussed.


To see more, go to the full-text article.

Current Trends in Artificial Intelligence (AI) Application to Oil and Gas Industry

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction

In recent years, artificial intelligence (AI), in its many integrated flavors from neural networks to genetic optimization to fuzzy logic, has made solid steps toward becoming more accepted in the mainstream of the oil and gas industry.

On the basis of recent developments in the upstream Oil & Gas field, it is becoming clear that the petroleum industry has realized the immense potential offered by intelligent systems. Moreover, with the advent of new sensors permanently placed in the wellbore, very large amounts of data carrying important and vital information are now available.

To make the most of these innovative hardware tools, software is required to process the data in real time. Intelligent systems are the only viable techniques capable of bringing real-time analysis and decision-making power to the new hardware.

An integrated, intelligent software tool must have several important attributes, such as the ability to integrate hard (statistical) and soft (intelligent) computing and to integrate several AI techniques. The most used techniques in the Oil and Gas sector are:

  • Genetic Algorithm (GA), inspired by the biological evolution of species in the natural environment, is a stochastic algorithm in which three key elements must be defined:
    1. Chromosomes, i.e. vectors constituted by a fixed number of parameters (genes).
    2. A collection of chromosomes, called the genotype, which represents the individuals of a population.
    3. The operations of selection, mutation, and crossover that produce a population from one generation (parents) to the next (offspring).
  • Fuzzy Logic (FL) is a mathematical tool able to take crisp (discrete) information as input and to predict the corresponding crisp output by means of a knowledge base (database) and a specific reasoning mechanism. To achieve this, the crisp information is first converted into a continuous (fuzzy) form, then processed by an inference engine, and finally re-converted to a crisp form.
  • Artificial neural network (ANN) is constituted by a large number of simple processing units, characterized by a state of activation, which communicate with each other by sending signals of different weights. The overall interaction of the units, together with an external input, produces a processed output. The latter is also responsible for changing the state of activation of the units themselves.
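The GA loop described in the first bullet can be sketched in a few lines of Python. The toy fitness function below (count of 1-genes in a bit string) is purely illustrative, a stand-in for a real O&G objective such as a history-matching error; selection, crossover and mutation are the three operations named above.

```python
import random

# Minimal genetic algorithm sketch: selection, crossover, mutation.
# The bit-string fitness is a toy objective, not an O&G model.
random.seed(42)

N_GENES, POP_SIZE, GENERATIONS, MUT_RATE = 16, 30, 60, 0.02

def fitness(chromosome):
    """Toy objective: number of 1-genes (maximum = N_GENES)."""
    return sum(chromosome)

def select(population):
    """Tournament selection: the better of two random individuals."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover producing one offspring."""
    cut = random.randrange(1, N_GENES)
    return p1[:cut] + p2[cut:]

def mutate(chromosome):
    """Flip each gene with probability MUT_RATE."""
    return [1 - g if random.random() < MUT_RATE else g for g in chromosome]

# Random initial population of chromosomes (parents).
population = [[random.randint(0, 1) for _ in range(N_GENES)]
              for _ in range(POP_SIZE)]

# Evolve: each new generation (offspring) is bred from the previous one.
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", N_GENES)
```

After a few dozen generations the population converges toward the all-ones chromosome; in a real application the same loop would simply swap in a domain-specific fitness function.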

The techniques described above have been adopted in the Oil and Gas field since 1989. Figure 1 shows the number of AI applications in the O&G industry over the years.

Figure 1 Artificial intelligence (AI) applications in the Oil and Gas industry during the years.

In the following sections some of the applications of AI in the O&G sector will be analyzed, with a particular focus on drilling operations (Exploration & Production).

To see more, go to the full-text article.

Petroleum Technologies and Sustainability in the Era of Climate Change

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction

Climate change is the biggest challenge that humankind has ever had to deal with. Despite residual skepticism on the topic, “climate change is real”[1], and it is already influencing, and will continue to influence, life on Earth.

The cause of climate change is attributed to the significant increase of greenhouse gases (mainly CO2) in the atmosphere, which trap heat radiating from Earth toward space. The analysis of ice cores[2] has revealed that, for millennia, the concentration of carbon dioxide in the atmosphere remained below 300 ppm. As shown in Figure 1, this threshold was broken in 1950 and, since then, the concentration of CO2 has never stopped growing, reaching a value of 410 ppm in 2019[3].


Figure 1 Variation of carbon dioxide concentration over millennia, estimated from atmospheric samples collected from ice cores[3].

According to the considerations above, the 21st century is indeed recognized as the “era of climate change”, mainly characterized by the increase of the global land-ocean mean surface temperature (GMST) and, as a consequence, by other environmental phenomena such as the rise of the average sea level and the retreat of glaciers.

The reason why the amount of GHGs in the atmosphere is increasing so rapidly is strictly connected to the growth of the world population driven by the development of the industrial sector. Since the mid-20th century, anthropogenic CO2 emissions have risen exponentially (see Figure 2), in line with the trend detected in the atmospheric carbon dioxide concentration. Human action is therefore identified as the main cause of global warming.

Figure 2 Global anthropogenic CO2 emissions[4].

The signing of the Paris Agreement (Paris climate conference - COP21, December 2015), the first-ever universal, legally binding global climate change agreement, represents an important act in the fight against climate change. Major players in the Oil & Gas and energy sector are financing the development of sustainable technologies in order to diminish their significant carbon footprint. Actions to mitigate carbon dioxide emissions are mainly directed at the main sources of CO2 which, as shown in Figure 3, come from the combustion of coal, oil and gas, and from flaring operations and cement production[5].


Figure 3 CO2 emissions by fuel type[5].
[4] “Climate Change 2014 Synthesis Report Summary Chapter for Policymakers,” 2014.


To see more, go to the full-text article.

Emergency Sea Protection: New Technologies During Oil Spill

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction


Every day, hundreds, if not thousands, of oil spills are likely to occur worldwide in many different types of environments, on land, at sea, and in inland freshwater systems.

Spills come from various parts of the oil industry, arising mainly during:
  • Oil exploration and production activities.
  • Oil transportation in tank ships, pipelines, and railroad tank cars.

The sea environment is particularly subject to oil pollution. It is estimated that approximately 706 million gallons of waste oil enter the ocean every year[1]. According to the data on oil spills in the United States published by Environmental Research Consulting (ERC), large spills (over 30 tons), which account for only 0.1% of incidents, represent 60% of the total amount of oil spilled. At the same time, 72% of spills are of smaller size (0.003 to 0.03 ton or less), as shown in Figure 1‑1.

Figure 1‑1 Size classes of U.S. marine oil spills, 1990 to 1999 (ERC data)[2].

Naturally, the relatively rare large spill incidents get the most public attention owing to their greater impact and visibility; for this reason it is impossible to measure the extent of the damage by considering only the size of the spillage. Location and oil type are extremely important. Significant efforts have been made to study oil spills since the Exxon Valdez spillage of 1989 (Figure 1‑3). However, such knowledge has not kept pace with the growth of oil and gas development[3]. In 2010, the Deepwater Horizon oil spill took place in the Gulf of Mexico (Figure 1‑3), considered one of the most catastrophic environmental disasters in human history. On that occasion, over 4.9 million barrels of crude oil were released, involving 180,000 km2 of ocean[4].

Timely and highly efficient responses can lead to more hopeful outcomes with less overall damage to the environment. The most used cleanup response devices and techniques[5] are (Figure 1‑2):

  • Manual recovery, mainly used for coastal oil cleanup, involves a team of workers/volunteers using tools like rakes and shovels to collect the oil into buckets and drums for transfer to a processing station.
  • Booms, mechanical barriers that protect natural resources from spreading crude oil. They are very useful to confine the oil spill facilitating the cleaning operations.
  • Skimmers, mechanical devices designed to remove oil from the water surface without causing changes to its physical or chemical properties and transfer it to storage tanks. Skimmers are usually used together with booms.
  • Sorbents, materials that can soak up oil from the water by either absorption or adsorption.
  • In situ burning, a cleaning technique consisting of a controlled burning of the oil that takes place at, or near, the spill site.
  • Dispersants are chemical spill treating agents, similar to emulsifiers, that accelerate the breakdown of oil into small droplets that “disperse” throughout the water. Dispersants are used to reduce the impact to the shoreline and to promote biodegradation of oil.
  • Bioremediation. It consists of the introduction of a microbial population (bio-augmentation) together with nutrients (bio-stimulation), to enhance the rate of oil biological degradation.

Figure 1‑2 A visual overview of all the oil spill response techniques[6].

The detection and monitoring of oil spillage are of fundamental importance for a rapid response. Innovations in sea protection therefore involve both oil spill monitoring and response techniques.

Figure 1‑3 BP Deepwater Horizon blowout 2010 (left), Exxon Valdez spillage (right)[7]-[8].
[2] D. Schmidt-etkin, Spill Occurrences: A World Overview. D.S. Etkin, 2011.
[3] Li, P., Cai, Q., Lin, W., Chen, B., & Zhang, B. (2016). Offshore oil spill response practices and emerging challenges. Marine Pollution Bulletin, 110, 6–27.
[4] Griggs, J. W. (2011). BP Gulf of Mexico oil spill. Energy Law Journal, 32, 57.
[5] B. Chen, X. Ye, B. Zhang, L. Jing, and K. Lee, Marine Oil Spills — Preparedness and Countermeasures, Second Edition. Elsevier Ltd., 2019.
[6] F. Mapelli et al., “Biotechnologies for Marine Oil Spill Cleanup: Indissoluble Ties with Microorganisms,” Trends Biotechnol., vol. xx, pp. 1–11, 2017.

To see more, go to the full-text article.


Hydrogen Role on the Decarbonization Transition Route

Author: Elvirosa Brancaccio - Serintel Srl - Rome (Italy)

1          Introduction

Awareness of climate change impacts and the need for deep decarbonization has increased greatly in recent years. In 2018 the EU published its vision for the future of energy in Europe, ‘A Clean Planet for All’, which aims at creating a “prosperous, modern, competitive and climate neutral economy by 2050.” A set of pathways has been developed and assessed that rely heavily on renewable energy and energy efficiency, with a role for natural gas and hydrogen.

The need to accelerate clean energy transitions is underscored by recent data: CO2 emissions rose for a second year in a row in 2018 to reach a record high.


Figure 1 Annual change in global energy-related CO2 emissions, 2014-2018[1]

In response to this growing awareness and the urgency of decarbonization, policy makers took action and in 2015 agreed to what is known as the Paris Agreement. This set the target of limiting the expected global average temperature increase to significantly less than 2°C, with the ambition to keep the limit to less than 1.5°C. In order to achieve such necessary and ambitious targets, the European economy, and in particular the energy sector, needs to reduce CO2 emissions significantly, to a fraction of current levels (e.g. -80%, -95%), with a growing consensus that net zero emissions will be required. Many changes will be required in how we work, travel, heat our homes and obtain the energy necessary to carry out all these activities, as shown in Figure 2.

Figure 2 The scale of Europe's decarbonisation challenge – emissions by sector (MtCO2e)[2]
  Hydrogen can help overcome many difficult energy challenges:
  • Integrate more renewables, including by enhancing storage options & tapping their full potential
  • Decarbonize hard-to-abate sectors – steel, chemicals, trucks, ships & planes
  • Enhance energy security by diversifying the fuel mix & providing flexibility to balance grids
Yet challenges remain:
  • costs need to fall;
  • infrastructure needs to be developed;
  • cleaner hydrogen is needed;
  • regulatory barriers persist.[3]

A key feature of hydrogen is its ability to act as both a source of clean energy (for a variety of uses), and an energy carrier for storage. Hydrogen can be transported through existing pipelines, mixed with natural gas, and through dedicated pipelines in the future. It offers an energy storage solution that costs ten times less than batteries.

Hydrogen is already widely used for industrial purposes across the steel, petrochemical and food sectors, but it is now also being used in mobility. In the future, it could also replace natural gas to heat residential and commercial buildings. Hydrogen can also be transformed into clean electricity by injecting it into fuel cells.

The most interesting thing about hydrogen is that, at the point of use, it does not generate carbon dioxide emissions or other climate-changing gases, nor does it produce emissions that are harmful to humans and the environment. For this reason, it will play a key role in ensuring that European and global decarbonisation objectives are achieved by 2050.[4]

Low-carbon hydrogen from fossil fuels is produced at commercial scale today, with more plants planned. It is an opportunity to reduce emissions from refining and industry.

  Figure 3 Hydrogen production with CO2 capture is coming online[5].
[1] IEA 2019
[2] Source: 2016 National Inventory Submissions (Common Reporting Format) for EU, Norway and Switzerland Note: Transport here refers to ground-based transport.  Aviation and waterborne transport are counted towards the ‘Other’ segment
[3] IEA, 2019
[5] Keith Scott, Chapter 1: Introduction to Electrolysis, Electrolysers and Hydrogen Production, in Electrochemical Methods for Hydrogen Production, 2019, pp. 1-27 DOI: 10.1039/9781788016049-00001 eISBN: 978-1-78801-604-9

To see more, go to the full-text article.


The Green Chemistry

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome
                 Leone Mazzeo – Researcher – Campus Bio-medico University of Rome

1          Introduction and Principles


An early conception of “green chemistry” was developed in 1990 by P. Anastas and J. Warner[1] through 12 principles, ranging from prevention and atom economy to pollution prevention and inherently safer chemistry. These principles, described below, offer a protocol to adhere to when developing novel chemical processes.

  1. Waste prevention: prevent waste production rather than cleaning up and treating wastes after they have been produced. Plan to minimize waste at every stage of the process.
  2. Atom economy: reduce waste by maximizing the number of atoms from all reagents that are incorporated into the final product. Use the atom economy concept to evaluate reaction efficiency.
  3. Less hazardous chemical synthesis: design reaction pathways to be as safe as possible. Consider the hazards of all substances handled during each single step of the reaction, including waste.
  4. Designing safer chemicals: minimize toxicity directly by proper design. Predict and analyze factors such as physical properties, toxicity, and environmental impact at each step of the designed process.
  5. Safer solvents & auxiliaries: look for the safest solvent available for any given step. Optimize the total amount of solvents and auxiliary substances used in order to minimize the waste produced.
  6. Design for energy efficiency: find the least energy-intensive chemical route, thus reducing heating and cooling as well as pressurized and vacuum conditions (i.e. try to stay as close as possible to ambient temperature and pressure).
  7. Use of renewable feedstocks: use feedstocks made from renewable (i.e. bio-based) sources rather than from petrochemical products.
  8. Reduce derivatives: minimize the use of temporary derivatives, such as protecting groups, in order to reduce waste production.
  9. Catalysis: look for catalysts that help to increase selectivity, minimize waste, reduce reaction times and increase energy efficiency.
  10. Design for degradation: design products that degrade easily in the environment. Ensure that neither the original nor the degraded products are toxic, bio-accumulative, or environmentally persistent.
  11. Real-time pollution prevention: monitor chemical reactions in real time to prevent the formation and release of any potentially hazardous or polluting products into the environment.
  12. Safer chemistry for accident prevention: develop chemical processes and procedures that are inherently safer, minimizing the risk of accidents. Evaluate and assess all potential risks beforehand.
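Principle 2 (atom economy) can be quantified directly: it is the molecular weight of the desired product divided by the summed molecular weights of all reagents. A minimal Python sketch, using hypothetical molecular weights rather than any specific reaction:

```python
# Atom economy = MW(desired product) / sum of MW(all reagents) * 100.
# The molecular weights below are hypothetical, for illustration only.

def atom_economy(product_mw: float, reagent_mws: list) -> float:
    """Percentage of reagent mass incorporated into the desired product."""
    return 100.0 * product_mw / sum(reagent_mws)

# An addition reaction incorporates every reagent atom: 100% atom economy.
addition = atom_economy(product_mw=100.0, reagent_mws=[60.0, 40.0])

# A substitution that expels a 30 g/mol leaving group wastes those atoms.
substitution = atom_economy(product_mw=70.0, reagent_mws=[60.0, 40.0])

print(addition, substitution)  # 100.0 70.0
```

The same ratio can be used to compare candidate synthetic routes before any laboratory work, in line with the preventive spirit of the principles.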

Today, more than 98% of all products and materials needed for modern economies are still derived from petroleum and/or natural gas, generating substantial quantities of wastes and emissions.

An exaggerated, but illustrative, view of twentieth century chemical manufacturing can be written as a recipe[2]:

  1. Start with a petroleum-based feedstock.
  2. Dissolve it in a solvent.
  3. Add a reagent.
  4. React to form an intermediate chemical.
  5. Repeat (2)–(4) several times until the final product is obtained; discard all waste and spent reagent; recycle solvent where economically viable.
  6. Transport the product worldwide, often for long-term storage.
  7. Release the product into the ecosystem without proper evaluation of its long-term effects.

The recipe for the twenty-first century will be very different:

  • Design the molecule to have minimal impact on the environment (short residence time, biodegradability).
  • Manufacture from a renewable feedstock (e.g. carbohydrate).
  • Use a long-life catalyst.
  • Use no solvent or a totally recyclable solvent.
  • Use the smallest possible number of steps in the synthesis.
  • Manufacture the product as required and as close as possible to where it is required.

A typical example of the twentieth-century chemical manufacturing model is plastic materials, which are also a typical example of linear economy: non-renewable resources, oil or ethane in this case, are used to produce plastic materials, which at the end of their life become wastes and are dispersed into the environment. Today, about 8 million metric tons of plastic escape into the world’s oceans each year[3], most of it from countries in South East Asia, where plastics use has outpaced waste management infrastructure and the situation is approaching catastrophic proportions.

The green chemistry approach is the correct way to deal with the current environmental situation, and it represents a promising strategy for future economic development, also for industrialized countries.

Paul Anastas, then of EPA, and John C. Warner developed the Principles of Green Chemistry (Figure 1), which help explain what the definition means in practice. The principles cover such concepts as:

  • Designing processes to maximize the amount of raw material that ends up in the product.
  • Using safe, environmentally-benign substances, including solvents, whenever possible.
  • Designing energy-efficient processes.
  • Using the best form of waste disposal: not creating it in the first place.
Figure 1: Principles of Green Chemistry
[1] P. T. Anastas, J. C. Warner, The Twelve Principles of Green Chemistry, Oxford Univ. Press, Oxford – UK (1998).
[2] Based on: Woodhouse, E. J. Social Reconstruction of a Technoscience? : The Greening of Chemistry.
[3] A.H. Tullo, Fighting ocean plastics at the source. Chem. & Eng. News, 96 (16) (2018) 29-34

To see more go to full text article FIG PDF

Floating LNG (FLNG) Technical Challenges and Future Trends

Authors: Marco Cocchi – Researcher – Campus Bio-medico University of Rome

1          Introduction

Natural gas (NG) and liquefied natural gas (LNG), the form in which NG is commonly traded, have attracted great attention because their use may alleviate rising concerns about the environmental pollution produced by other fossil fuels such as coal and oil.

In the figure below, the typical components of NG are reported, also giving an idea of their relative amounts:
Figure 1: Natural gas composition[1]

There are two main categories of final products obtained from gas processing. Pure natural gas liquids, meaning that at least 90% of the liquid contains ONE type of primary molecule, such as:

  • Ethane
  • Propane
  • Normal butane
  • Isobutane

Mixed natural gas liquids, meaning that the liquid contains at least two different types of primary molecules, such as:

  • Ethane/Propane (EP) Mix
  • Natural Gasoline

NG reserves may be located in isolated underground areas, and a significant portion of the reserves is often located off-shore. The off-shore extraction of NG and its conversion into liquefied NG has reached a turning point in terms of economic feasibility; in fact, just a few years ago, that type of extraction was thought to be:

  • Environmentally unsafe, due to the lack of previous off-shore LNG practice
  • Particularly expensive, due to the installation of long subsea NG pipelines

As a result, there are many efforts to develop and monetize these stranded offshore reserves with floating facilities where offshore liquefaction of NG is possible. The development of floating LNG (FLNG) technology is therefore becoming important.

Off-shore natural gas facilities such as FLNG represent a very complex concentration of chemical plant technologies, designed to be installed in limited space on dynamically moving vessels.

The space limitation of floating vessels is indeed a challenging problem to overcome. For this reason, the amount of feed gas that can be processed by a floating liquefaction facility is restricted. Units for gas pretreatment are estimated to occupy about 50% of the available deck space of a floating production facility, although this depends on the impurity level in the feed gas stream. This indicates that FLNG is better suited to feed gas streams containing low levels of inert gases and impurities. CO2, hydrogen sulfide, nitrogen, mercury, and acid gases are the main impurities determining the amount of feed gas that can be handled.


To see more go to full text article FIG PDF

Supercritical Geothermal Resources: Exploration and Development

Author: Elvirosa Brancaccio - Serintel Srl - Rome (Italy)

1.     Introduction

The demand for clean, renewable energy is continuing to increase around the world. Much of that demand is being met with wind and solar power, but these resources are intermittent and therefore require balancing. Presently, developed geothermal resources are not adequate to provide the balancing that will be needed in the future; thus, attention is turning to supercritical geothermal resources.

Figure 1 Iceland Deep Drilling Project[1]

By utilizing supercritical fluids, geothermal energy could play an important role in a carbon-zero energy future. Supercritical fluids exist at temperatures above 374 °C and pressures above 22 MPa (the critical point of water); their much higher heat content and lower density give them the potential to generate around 10 times more energy than conventional geothermal for the same amount of extracted fluid [2].

Volcanic geothermal systems are associated with magmatic intrusions in the upper part of the Earth’s crust, characterized by increased temperature, high specific fluid enthalpy, and convection of groundwater. Conventional exploitation of geothermal fluids from such systems typically produces an average of about 3-5 MW of electric power per well, with total world exploitation of geothermal energy in 2018 corresponding to about 14.4 GW [3]. Conductive heat transfer from a magmatic intrusion to the surrounding groundwater occurs in the roots of the geothermal system, below the depth of typical conventional geothermal wells. Recent modelling suggests that supercritical fluids with temperatures and enthalpies exceeding 400 °C and 3000 kJ/kg, respectively, exist at the boundary between geothermal systems and the magmatic heat source, with such fluids possibly capable of generating 30-50 MW of electricity from a single well, i.e. up to ten times more than conventional geothermal wells.
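The order-of-magnitude argument above can be sketched numerically: per-well electric power is roughly mass flow times usable specific enthalpy times conversion efficiency. The flow rate and efficiencies below are illustrative assumptions, not values from the article:

```python
# P[MW] = m_dot[kg/s] * delta_h[kJ/kg] * eta / 1000
# Flow rate and conversion efficiencies are illustrative assumptions.

def electric_power_mw(m_dot_kg_s: float, delta_h_kj_kg: float, eta: float) -> float:
    """Electric power from mass flow, specific enthalpy drop and heat-to-power efficiency."""
    return m_dot_kg_s * delta_h_kj_kg * eta / 1000.0

# Conventional well: ~1000 kJ/kg usable enthalpy, modest conversion efficiency.
conventional = electric_power_mw(m_dot_kg_s=40.0, delta_h_kj_kg=1000.0, eta=0.10)

# Supercritical well: ~3000 kJ/kg enthalpy and better conversion efficiency.
supercritical = electric_power_mw(m_dot_kg_s=40.0, delta_h_kj_kg=3000.0, eta=0.30)

print(conventional, supercritical)  # 4.0 36.0 (MW, for the same extracted flow)
```

With these assumed numbers the same mass flow yields roughly 4 MW versus 36 MW, consistent with the 3-5 MW and 30-50 MW figures quoted above.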

[3] A. Richter, Global geothermal capacity reaches 14,369 MW top 10 geothermal countries, Oct 2018, Think GeoEnergy ­ Geothermal Energy News, 2018.

To see more go to full text article FIG PDF

Energy Storage Using Thermal Processes and Nanotubes

Author: Marcello Pompa - Industrial Engineering - University "Campus Bio-Medico" of Rome

1. Theme description

Since the 1970s, science has tried to find a solution to the energy crisis, developing new methods to use and store renewable energy[1].

The United States Department of Energy projects that the world’s energy consumption will increase by 20% and that the overuse of fossil fuels will have a hard impact on the climate[2].

The hardest current global challenge is to use renewable energy rather than fossil fuels, improving energy storage efficiency[3].

One of the most interesting technologies for energy storage and conversion is nanostructured materials, thanks to their mechanical and electrical properties[4].

Carbon nanotubes (CNTs) are a kind of nanostructured material with very good electrical and mechanical properties thanks to their dimensions and surface properties. Carbon nanotubes were discovered in 1991 as a minor byproduct of fullerene synthesis[5]. Research into CNTs has since increased, significantly reducing the cost of this technology and improving its processability and scalability[6]. Two types of nanotubes have been discovered: single-wall and multi-wall.

In the following, an overview of thermal processes to store energy, and in particular of the use of carbon nanotubes in the energy field (with a description of this technology and a presentation of the major results obtained with CNTs), is reported.

[2] Shukla, A. K. S. S., &Vijayamohanan, K. (2000). Electrochemical supercapacitors: En‐ ergy storage beyond batteries. Current Science, 79.
[3] Arico, A. S., et al. (2005). Nanostructured materials for advanced energy conversion and storage devices. Nat Mater, 4(5), 366-377.
[4] Chung, J., et al. (2004). Toward Large-Scale Integration of Carbon Nanotubes. Lang‐ muir, 20(8), 3011-3017.
[5] Iijima, S. Nature 1991, 354, 56-57.
[6] Sherman, L. M. (2007). Carbon Nanotubes Lots of Potential--If the Price is Right. 01/05/12]; Available from:,‐ tentialif-the-price-is-right.

To see more go to full text article FIG PDF


Intelligent Clothing to Improve Safety at Work and Support Production

Author: Marcello Pompa - Industrial Engineering - University "Campus Bio-Medico" of Rome
  1. Theme description

In order to reduce costs and improve worker productivity, some companies are driving the development of smart wearables and sensors in industrial environments[1].

Currently, safety at work is ensured through PPE (personal protective equipment) such as safety eyewear. Technology upgrades could make these standards perform even better[2].

Examples of possible wearable technologies that can greatly improve workplace safety are[3]:

  • Smart bands and sensors embedded in clothing and gear that monitor workers’ health and wellbeing by tracking factors such as heart rate, heat stress, respiration, fatigue and exposure. Warnings based on these data could be sent to workers when critical levels are reached;
  • In dangerous environments, machine and environmental sensors that provide contextual information to field workers to help them understand their surroundings, plus wearable GPS tracking to monitor their position;
  • Smart glasses and other HUDs (Head-Up Displays) that allow workers to access specific instructions and manuals in the field, in addition to allowing remote guidance;
  • In the insurance sector, camera-equipped clothing that could be used to document a job or incident for later review.
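The first bullet, vital signs tracked against critical levels, reduces to a simple threshold check. A minimal Python sketch; the sensor names and limits below are illustrative assumptions, not from the article:

```python
# Compare streamed vital-sign readings against critical limits and list
# the parameters that should trigger a warning to the worker's band.
# Parameter names and thresholds are illustrative assumptions.

CRITICAL_LIMITS = {"heart_rate_bpm": 140, "core_temp_c": 38.5, "co_ppm": 35}

def check_vitals(reading: dict) -> list:
    """Return the parameters in `reading` that exceed their critical limit."""
    return [name for name, limit in CRITICAL_LIMITS.items()
            if reading.get(name, 0) > limit]

reading = {"heart_rate_bpm": 152, "core_temp_c": 37.9, "co_ppm": 12}
print(check_vitals(reading))  # ['heart_rate_bpm']
```

A real system would add hysteresis and sensor-failure handling, but the core logic is this comparison running continuously on streamed data.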

In the following, a review of intelligent clothing, with future developments, is reported.


To see more go to full text article FIG PDF


Smart Grids

Author: Marcello Pompa - Industrial Engineering - University "Campus Bio-Medico" of Rome
  1. Theme description


Energy systems are changing fast. The methods to produce energy and the ways to transmit it are changing, the consumption of electrical energy is growing, and its generation is becoming more decentralized, making grid management increasingly complex[1].

With the objective to overcome the weaknesses of conventional electrical grids, the Smart Grid was introduced. A Smart Grid is an electricity network based on two-way digital communication. This system allows for analysis, monitoring, communication and control with the aim to improve efficiency and reduce energy consumption and cost[2].

The Smart Grid has the opportunity to move the energy industry into a future of greater reliability, efficiency, and availability, while improving environmental health. During this transition, it will be critical to carry out technology improvements, studies, consumer education and standards regulation to ensure the benefits of the Smart Grid. The advantages of Smart Grids are[3]:

  • Faster restoration of electricity after power disturbances;
  • Improved transmission efficiency;
  • Reduced operation and management costs;
  • Increased integration of large-scale renewable energy systems;
  • Improved security;
  • Easier use of plug-in hybrid technology for electric vehicles[4].

In the following, a review of smart grids, with examples of installations and future developments, is reported.


To see more go to full text article FIG PDF

Solar: paper like cells

Author: Marcello Pompa - Industrial Engineering - University "Campus Bio-Medico" of Rome
  1. Theme description


There is significant interest in the production of renewable energy, and researchers constantly try to find or improve methods to produce green energy. One of the best renewable energy sources is solar energy, available every day (though discontinuously)[1].

A new system to capture and use solar energy is 3PV (printed paper photovoltaics)[2]. This technology uses an ink with electrical properties to print an advanced solar cell system on many materials, including paper[3].

3PV was first developed and studied by MIT researchers in 2011[4].

This new technology could be incorporated into clothing, accessories, etc., opening the way to new methods of using solar energy[5]. The printed cells are flexible, so they could be used on documents, windows, wall coverings, etc., adapting to their form. Furthermore, this cheap technology could bring new solar systems to rural areas needing a reliable source of electricity.

The efficiency of 3PV started at about 1% in 2011 and has now reached about 20%[6].

Additionally, the power-to-weight ratio of this technology is among the highest ever achieved, far surpassing that of common photovoltaic cells on glass substrates.

In the following, an overview of 3PV and of the major results obtained by this technology so far is reported.



To see more go to full text article FIG PDF

Smart Fluid

Author: Marcello Pompa - Industrial Engineering - University "Campus Bio-Medico" of Rome
  1. Theme description 


A smart fluid, in its earliest form also called an electro-rheological fluid[1], is a liquid suspension of metal or zeolite particles which solidifies when an electric field is applied to it, becoming fluid again when the field is removed.

Smart fluids can be divided in four main classes:

  • electro-rheological (ER) fluids[2];
  • magneto-rheological (MR) fluids[3];
  • magneto-rheological elastomers (MRE)[4];
  • electro-conjugate liquids[5].
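ER and MR fluids are commonly described with a Bingham-plastic model: below a field-dependent yield stress the suspension behaves like a solid, above it it flows. A minimal Python sketch with illustrative numbers (assumptions, not measured data):

```python
# Bingham-plastic model often used for ER/MR fluids (once flowing):
#   tau = tau_yield(field) + mu * gamma_dot
# The yield stress and viscosity values below are illustrative assumptions.

def shear_stress(gamma_dot: float, tau_yield: float, mu: float) -> float:
    """Shear stress (Pa) at shear rate gamma_dot (1/s) for a flowing Bingham fluid."""
    return tau_yield + mu * gamma_dot

# Field off: negligible yield stress, the suspension flows like an ordinary liquid.
tau_off = shear_stress(gamma_dot=100.0, tau_yield=0.0, mu=0.1)

# Field on: particle chains add a large yield stress, so the fluid resists
# motion until that stress is exceeded (the "solidified" behaviour).
tau_on = shear_stress(gamma_dot=100.0, tau_yield=5000.0, mu=0.1)

print(tau_off, tau_on)  # 10.0 5010.0
```

This field-controlled jump in apparent stiffness is what dampers and struts based on smart fluids exploit.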

Since the 1960s, engineers have tried to develop new devices based on ER smart fluids (vibration dampers, flow control valves, etc.), without important results. The turning point came in the 1990s, after the discovery of MR smart fluids: indeed, in 2002 smart-fluid-based suspension damping struts were introduced on the Cadillac Seville STS automobile[6].

The interest for this kind of technology is considerable and the perspective for a new device based on smart fluids is real.

In the following, a review of smart fluids, with expected developments in the near future, is reported.


[2] W. M. Winslow: J. Appl. Phys., 1949, 20, 1137-1140
[3] J. Rabinow: AIEE Trans., 1948, 67, 1308-1315
[4] B. X. Ju, M. Yu, J. Fu, Q. Yang, X. Q. Liu, and X. Zheng, “A novel porous magnetorheological elastomer: preparation andevaluation,” Smart Materials and Structures, vol. 21, no. 3, Article ID 035001, 2012
[5] W.-S. Seo, K. Yoshida, S. Yokota, and K. Edamura, “A high performance planar pump using electro-conjugate fluid with improved electrode patterns,” Sensors and Actuators A: Physical, vol. 134, no. 2, pp. 606–614, 2007
[6] R. Stanway: Mater. World, February 2002, 10-12

To see more go to full text article FIG PDF

Desulfurization from Gas Oil: sulfur removal of gas oil to 10 ppm

Author: Vincenzo Piemonte - Associate Professor - University "Campus Bio-Medico" of Rome
  1. Introduction

The most used source of energy in the world is crude oil. Major portions of crude oil are used as transportation fuels such as diesel, gasoline and jet fuel. However, crude oil contains sulfur, typically in the form of organic sulfur compounds. The sulfur content and the API gravity are the properties with the most influence on the value of the crude oil. The sulfur content is expressed as a percentage of sulfur by weight and varies from less than 0.1% to greater than 5%, depending on the type and source of the crude oil[1].

The removal of organo-sulfur compounds (ORS) from diesel fuel is key to reducing air pollution, cutting the emission of toxic gases (such as sulfur oxides) and other pollutants. The adsorption desulfurization process is one of the easiest and fastest methods to remove sulfur from diesel oils[2].

Adsorptive desulphurization of gasoline over a nickel-based adsorbent provides high capacity and selectivity. The adsorption involves C-S bond cleavage, as evidenced by the formation of ethylbenzene from benzothiophene in the absence of hydrogen gas.

For example, hydrodesulfurized straight-run gas oil having less than 50 ppm sulfur can be treated with activated carbon fiber to attain ultra-low-sulfur gas oil having less than 10 ppm sulfur.
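The 50 ppm to 10 ppm step above fixes the duty of the polishing stage, which a one-line calculation makes explicit:

```python
# Fraction of the residual sulfur the activated-carbon step must capture
# to go from hydrodesulfurized gas oil (<50 ppm) to ultra-low-sulfur (<10 ppm).

def removal_fraction(feed_ppm: float, product_ppm: float) -> float:
    """Fraction of incoming sulfur that must be removed."""
    return 1.0 - product_ppm / feed_ppm

print(removal_fraction(50.0, 10.0))  # 0.8
```

So the activated-carbon polishing step must capture at least 80% of the residual sulfur left by hydrodesulfurization.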

The next paragraphs describe the desulphurization of gasoline by some of the methods in use.

[1]Desulfurizati on of Gasoline and Diesel Fuels, Using Non-Hydrogen Consuming Techniques, Abdullah Al-Malki, King Fahad University of Petroleum and Minerals, October 2004
[2]Adsorption Process of Sulfur Removal from Diesel Using Sorbent Materials, Isam A. H. Al Zubaidy, Fatma Bin Tarsh, Noora Naif Darwish, Balsam Sweidan Sana Abdul Majeed, Aysha Al Sharafi, and Lamis Abu Chacra, Journal of Clean Energy Technologies, Vol1, No. 1, January 2013

To see more go to full text article FIG PDF

Applications of robotic technologies in the upstream and downstream sector

Author: Giovanni Franchi-Chemical Engineer – PhD Student –University UCBM – Rome (Italy)
  1. Theme Description

According to the 2017 edition of the BP Energy Outlook, the world economy will double over the next 20 years with an annual growth of 3.4%, driven by China and India. Oil, gas and coal will account for more than 75% of energy supplies in 2035, even though the use of renewable resources will increase. In this context, gas will overtake coal, becoming the second fuel source in 2035 with an annual growth of 1.6%.[1] Focusing on oil demand, it reached 94.4 Mbbl/day in 2015 and is expected to overtake 100 Mbbl/day in 2021.[2] Therefore, oil companies have started to explore new unconventional reservoirs, such as tight and heavy oil, shale gas, etc., with the aim of increasing production.[3] However, these new oilfields are in desert, arctic and deep-water zones and require specific technologies to be exploited. In the last fifty years several accidents occurred, such as the Exxon Valdez oil spill in 1989[4] or the Deepwater Horizon oil spill in 2010[5]. In this scenario, robotic technologies can play a key role in increasing safety, efficiency and productivity and in minimizing risks. Therefore, in the following sections their applications in the oil and gas sector are described.

Figure 1 - Energy Consumption from 1965 up to 2035.

To see more go to full text article FIG PDF


The Contribute of Digital Technologies for the Oil and Gas Industry

Author: Giovanni Franchi-Chemical Engineer – PhD Student –University UCBM – Rome (Italy)

1. Theme Description

The IEA estimated, in the “Medium-Term Oil Market Report 2016”, that oil demand will increase from 94.4 Mbbl/day in 2015 up to 101.6 Mbbl/day in 2021, with a mean annual growth of 1.2%, driven by Asia and the Middle East.[1] However, in the last ten years the costs of production have increased by about 60%, while oil prices have fallen.[2] For example, OPEC oil prices decreased from 109.45 US$/bbl in 2012 to 40.68 US$/bbl in 2016.[3] In this scenario, digital technologies can play a pivotal role in reducing costs and risks and in increasing the production and efficiency of operations. McKinsey & Company, indeed, argued that digital technologies could reduce capital expenditures by about 20%, and operating costs by 3-5% in upstream and by about 50% in downstream.[4] Moreover, digitalization could create, in the next ten years, about 1 trillion dollars of value for the sector, of which 580-600 billion for upstream, 100 billion for midstream and 260-275 billion for downstream. Furthermore, it could improve productivity by about 10 billion dollars, reduce water usage and emissions by 30 and 430 billion dollars respectively, and save 170 billion dollars for customers.[5] Therefore, in the following sections the main digital technologies and the digital oilfield are described.
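The per-segment value figures quoted above can be cross-checked against the headline total with simple arithmetic (all figures in billions of dollars, as quoted from the source):

```python
# Sum of the quoted per-segment value ranges (billions of dollars over
# ten years) to check consistency with the "about 1 trillion" total.
upstream = (580, 600)
midstream = (100, 100)
downstream = (260, 275)

low = upstream[0] + midstream[0] + downstream[0]
high = upstream[1] + midstream[1] + downstream[1]
print(low, high)  # 940 975 -> consistent with "about 1 trillion dollars"
```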

[2] H. Hassani, The role of innovation and technology in sustaining the petroleum and petrochemical industry, Technological Forecasting and Social Change, 2017, 119, pp. 1-17.

To see more go to full text article FIG PDF

Hydrocracking: converting Vacuum Residue in Naphtha and Diesel

Marcello De Falco – Associate Professor - University "Campus Bio-Medico" of Rome.
Mauro Capocelli - Researcher - University "Campus Bio-Medico" of Rome.
  1. Theme description

In the refinery sector, both the fuel and feedstock markets and the more stringent environmental regulations are exacerbating the need to maximize residue conversion to distillates. In particular, while the distillate fuel demand (gasoline, diesel) is still increasing, the demand for residual fuel oils is about to fall sharply.

Compared with traditional technologies, the present refineries face several challenges because of the presence of crude oils characterized by high content of aromatics, acids, metals and nitrogen, therefore putting more pressure on the hydrocracking and hydrotreating processes that have to handle a low quality feedstock without significant loss of yield or efficiency[1].

The Hydrocracking (HC) process is able to remove undesirable aromatic compounds from petroleum stocks, producing cleaner fuels and more effective lubricants. In other words, its main application is to upgrade vacuum gas oil, alone or blended with other feedstocks (light-cycle oil, deasphalted oil, visbreaker or coker gas oil), producing intermediate distillates (naphtha, jet and diesel fuels), low-sulfur oil and extra-quality FCC feed. HC works by the addition of hydrogen and by promoting the cracking of the heavy fractions into lighter products. With reference to Figure 1, HC globally involves catalytic cracking (the endothermic splitting of a C-C bond) and the addition of hydrogen to the C=C bond (exothermic).

Figure 1: Reactions of cracking and hydrogen addition during hydrocracking

To see more go to full text article FIG PDF


Producing Catalyst such as Methanol Synthesis Catalyst, Ziegler Natta Catalyst, Ammonia Catalyst

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1.Theme Description

Catalysts are substances used to speed up chemical reactions or to selectively drive the desired reaction to promote maximum efficiency. They can be homogeneous or heterogeneous, i.e. they can be in the same aggregation state as one or more reagents or not. Focusing the attention on heterogeneous solid-state catalysts, which are by far the most applied, they are generally shaped bodies of various forms, such as rings (Raschig rings being the most diffused, refer to Figure 1), spheres, tablets and pellets, and their performance is measured according to indices such as:

  • activity (the rate at which a chemical reaction proceeds towards equilibrium in the presence of the catalyst);
  • selectivity (the ratio between the rate of the desired reaction and the rate of the secondary undesired reactions);
  • specific surface area per cubic meter or kilogram;
  • diffusivity (the ability of reagents and products to diffuse within the catalyst structure).
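Two of the indices above lend themselves to a direct numerical definition. A minimal Python sketch, with illustrative rate and area values (assumptions, not data from the article):

```python
# Selectivity = rate of desired reaction / rate of undesired reactions.
# Specific surface area = total internal area / catalyst mass.
# All numbers below are illustrative assumptions.

def selectivity(rate_desired: float, rate_undesired: float) -> float:
    """Ratio of the desired reaction rate to the undesired ones."""
    return rate_desired / rate_undesired

def specific_surface_area(total_area_m2: float, mass_kg: float) -> float:
    """Surface area per unit mass of catalyst (m^2/kg)."""
    return total_area_m2 / mass_kg

print(selectivity(9.0, 1.5))              # 6.0
print(specific_surface_area(2.0e5, 1.0))  # 200000.0
```

Comparing candidate catalysts on such indices, together with diffusivity and activity, is how shaped bodies like rings or pellets are selected for a given duty.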
Fig. 1 - Raschig rings

To see more go to full text article FIG PDF


GTL: Small Scale and Modular Technologies for Gas to Liquid Industry

Author: Elvirosa Brancaccio - Serintel Srl – Rome (Italy)


1. Introduction

Gas-to-liquids (GTL) is a technology that enables the production of clean-burning diesel fuel, liquid petroleum gas, base oil and naphtha from natural gas. The GTL process transforms natural gas into very clean diesel fuel because products are colorless and odorless hydrocarbons with very low level of impurities.

Much of the world’s natural gas is classified as “stranded,” meaning it is located in a remote area, far from existing pipeline infrastructure. The volumes often are too small to make constructing a large-scale treatment gas plant cost-effective.  As a result, the gas is typically re-injected into the reservoir, left in the ground, or flared, which is harmful to the environment. However, the availability of this low cost, stranded gas has incentivized companies to develop innovative technologies that can economically and efficiently utilize this gas converting it into a transportation fuel like diesel and jet fuel.

Refineries can also use GTL to convert some of their gaseous hydrocarbon waste products into valuable fuel oil which can be used to generate income.

Small-scale GTL plants are containerized units comprised of a reformer for synthesis gas production, a Fischer Tropsch (FT) reactor for syncrude production, and, in some cases, an upgrading package, which is used to further refine the FT products into the desired transportable fuel.  Since these containerized units already have about 70 percent of their construction complete before reaching the plant site, on-site construction costs are significantly reduced.  In cases where capacity needs to be increased, additional units can be easily shipped via truck or ship and connected in parallel to the existing process.  Depending on the technology, capacity can range anywhere from 100 barrels per day (bpd) to 15,000 bpd.
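The parallel scale-up described above amounts to adding containerized trains until the target capacity is met. A minimal sketch; the per-train capacity is an illustrative assumption within the 100-15,000 bpd range quoted:

```python
import math

# Number of identical modular GTL trains needed to reach a target capacity.
# The per-train capacity below is an illustrative assumption.

def trains_needed(target_bpd: float, train_bpd: float) -> int:
    """Smallest number of parallel trains meeting or exceeding the target."""
    return math.ceil(target_bpd / train_bpd)

print(trains_needed(5000, 1100))  # 5 trains connected in parallel
```

Because each added train is a pre-built container, this kind of step-wise expansion avoids the long on-site construction of a stick-built plant.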


2. GTL Process Phases

Fischer-Tropsch is the process of chemically converting natural gas to liquids (GTL), coal to liquids (CTL), biomass to liquids (BTL) or bitumen from oil sands to liquids (OTL).

All four processes consist of three technologically separate sections.

  1. The production of synthesis gas (syngas).

The carbon and hydrogen are initially divided from the methane molecule and reconfigured by steam reforming and/or partial oxidation. The syngas produced consists primarily of carbon monoxide and hydrogen.

  2. Catalytic (F-T) synthesis.

The syngas is processed in Fischer-Tropsch (F-T) reactors of various designs, depending on the technology, creating a wide range of paraffinic hydrocarbon products (synthetic crude, or syncrude), particularly those with long-chain molecules (e.g. with as many as 100 carbons in the molecule).

  3. Cracking – product workup.

The syncrude is refined using conventional refinery cracking processes to produce diesel, naphtha and lube oils for commercial markets. By starting with very long chain molecules the cracking processes can be adjusted to an extent in order to produce more of the products in demand by the market at any given time. In most applications it is the middle distillate diesel fuels and jet fuels that represent the highest-value bulk products with lubricants offering high-margin products for more limited volume markets. In modern plants, F-T GTL unit designs and operations tend to be modulated to achieve desired product distribution and a range of product slates.
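The wide paraffinic product slate produced in step 2 is classically described by the Anderson-Schulz-Flory (ASF) distribution, a standard result of F-T theory (not taken from this article). A sketch, with an illustrative chain-growth probability:

```python
# ASF distribution: the mass fraction of chains with n carbon atoms is
#   W_n = n * (1 - alpha)**2 * alpha**(n - 1)
# where alpha is the chain-growth probability of the F-T catalyst.
# alpha = 0.85 below is an illustrative value, not from the article.

def asf_mass_fraction(n: int, alpha: float) -> float:
    """Mass fraction of hydrocarbon chains containing n carbon atoms."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85

# Share of the product slate in the middle-distillate range (~C10-C20),
# the highest-value cut according to the text:
diesel_cut = sum(asf_mass_fraction(n, alpha) for n in range(10, 21))
print(round(diesel_cut, 2))  # 0.39
```

Tuning the catalyst and operating conditions shifts alpha, which is the lever behind the statement that product distribution "can be adjusted to an extent" toward the cuts the market demands.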

Fig. 1 - GTL technological process with Fischer-Tropsch synthesis reactor

Research and development in GTL processes and plants involves several parts of the plant:

  • increasing the production efficiency of each single unit used upstream and downstream
  • improving the catalyst in the FT reactor in order to increase its selectivity and durability
  • improving the design of the reactors to reduce the footprint of the entire plant or module

3. Start and Development

Synthetic fuel production technology, known as GTL, was invented in the 1920s. One of the best-known ways to create synthetic fuel is through Fischer-Tropsch (FT) synthesis. FT technology was initially developed in Germany to solve petroleum shortages leading up to World War II. By 1944, Germany was producing 124 Mbpd of synthetic fuels from coal at 25 FT plants.

Next-generation technology was developed in South Africa, which sought to support its economy without oil. In the 1970s, the technology evolved in Western Europe and the US with big plants and large-scale production.

Starting from the last decades, advances in GTL technologies have enabled small-scale GTL, and even micro-scale GTL, to be operationally and potentially economically feasible.

Several factors are converging to drive the growth in the GTL industry:

  1. Desire to monetize existing stranded gas reserves;
  2. Energy companies keen to gain access to new gas resources;
  3. Market demand for cleaner fuels and new cheaper chemical feedstocks;
  4. Rapid technology development by existing and new players;
  5. Increased interest from gas rich host governments

As petroleum prices remain high, new discoveries make natural gas abundant and cheap by comparison, and more advanced energy companies are exploring ways to reduce the CAPEX of synthetic fuel production. As part of this goal, companies are looking into building smaller-scale, modular plants that can operate in remote locations[1].

Several Gas-to-Liquids (GTL) technologies have emerged over the past three decades as a credible alternative for gas monetisation for gas-producing countries to expand and diversify into the transportation fuel markets. The final GTL product may be syncrude, which can be injected into an oil pipeline, thereby avoiding the need to transport another product to market, or higher-value liquid fuels or chemical feedstocks such as gasoline, diesel (without sulphur and with a high cetane number), naphtha, jet fuel, methanol or di-methyl ether (DME).


4. Plants and Projects


At present, five commercial-scale GTL plants are in operation (Fig. 2). These five plants include:

  • Bintulu GTL, Malaysia
  • Escravos GTL, Nigeria
  • Mossel Bay GTL, South Africa
  • Oryx GTL, Qatar
  • Pearl GTL, Qatar.

These five plants represent nearly 259 Mbpd of capacity. At 140 Mbpd, Shell’s Pearl GTL complex represents more than 50% of the world’s total commercial-scale GTL capacity.
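Both figures can be cross-checked from the per-plant capacities quoted in this section (in Mbpd):

```python
# Per-plant capacities as quoted in the text (Mbpd).
plants_mbpd = {"Mossel Bay": 36.0, "Bintulu": 14.7, "Pearl": 140.0,
               "Oryx": 34.0, "Escravos": 33.0}

total = sum(plants_mbpd.values())
pearl_share = plants_mbpd["Pearl"] / total
print(total, round(pearl_share, 2))  # ~257.7 Mbpd; Pearl ~0.54 (> 50%)
```

The per-plant numbers sum to about 258 Mbpd, consistent with the "nearly 259 Mbpd" quoted, and Pearl's share is indeed just over half.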

Fig. 2 - Commercial-scale GTL plants in operation around the world [2]

The first GTL plant was developed by PetroSA in 1992. This 36-Mbpd plant is in Mossel Bay, South Africa. The plant utilizes FT technology to process methane-rich natural gas into high-quality, low-sulfur synthetic fuels. Products include unleaded petrol, kerosene, diesel, propane, distillates, process oil and alcohols.

Shell commissioned its first commercial GTL plant in Bintulu, Malaysia, in 1993. The plant’s initial construction cost was $850 MM. The 12.5-Mbpd plant underwent a $50-MM debottlenecking that increased total capacity to 14.7 Mbpd. Since 1993 it has produced the following products: liquefied petroleum gas (up to 5%), naphtha (up to 30%), diesel fraction (up to 60%) and paraffin (up to 5-10%).

Fig. 3 - Bintulu GTL  plant [3]

The Pearl GTL complex is the largest GTL facility in the world. The 140-Mbpd facility is located in Ras Laffan Industrial City, Qatar. The $19-B natural gas processing and GTL integrated complex was developed by a JV of Shell and Qatar Petroleum.

Oryx GTL was the Middle East’s first GTL plant. Developed by Qatar Petroleum and Sasol, the $6-B plant also processes natural gas from Qatar’s North Field. Construction of the facility began in late 2003, and it started production in early 2007. The facility processes 330 Mcfd of methane-rich gas from Qatar’s North field and produces 34 Mbpd of liquids, with the majority being low-sulfur, high-octane GTL diesel.

The latest commercial-scale GTL plant to commence operations is the Escravos GTL plant. The $10-B facility was developed by a JV consisting of Chevron, Sasol and Nigerian National Petroleum Corp. The plant utilizes technology from both JV partners to convert up to 325 MMcfd of natural gas into 33 Mbpd of GTL diesel and GTL naphtha. The plant has been operational since 2014.


The ENVIA Energy GTL plant at the Waste Management landfill in Oklahoma came online in 2017. The plant, partially fed with landfill gas, announced its first finished, saleable products on June 30, 2017, but as of January 2018 it had not yet reached its 250-bpd design capacity.

The start-up of four other plants (Greyrock 1, Juniper GTL, Primus 1 and Primus 2) is expected in 2018. The new owner of Juniper GTL, York Capital, will likely target future plant sizes of more than 5,000 bpd (consuming 50 MMscfd of gas). Greyrock and Primus GE have announced continued strong business-development efforts in the gas-flare arena.

Haldor Topsoe has joined forces with Modular Plant Solutions (MPS) and has designed and engineered a small-scale methanol plant (215 tpd) called “Methanol-To-Go™”. The size of the plant is similar to the Primus 1 and 2 plants, with a gas feed rate of 7 MMscfd.

BgtL is a new player in the micro-GTL arena (20-200 bpd). However, its patented technologies are based on two decades of R&D work in research institutes. Its portfolio of products includes plant modules that convert gas volumes as small as 2 Mscfd into a range of products including oil, diesel, methanol and others.

Summarizing, the current leading GTL technology providers with commercial offers are:

Micro-GTL: Unattended operation units below ~1 MMscfd and below ~US$10 MM

  • Greyrock
  • GasTechno
  • BgtL

Mini-GTL: Small modular plants with some operators and a cost >US$10 MM

  • Greyrock
  • EFT/Black and Veatch
  • Primus GE
  • Topsoe/MPS
  • Expander Energy

More information on these companies and their projects can be found in the most recent bulletin on GTL technology [4].

The following figure reports the EIA forecast for GTL production over the next few years:
Fig. 4 - Global gas to liquid plant production, 2017 [5]

4.1  Available Technologies Overview

The GTL market is pushing toward small-scale and modular units. These types of plants can be built at greatly reduced capital cost, which can run into the billions of dollars for large-scale facilities.

Gas units, technologies used, size and other functional data for several companies involved in the GTL technology are summarized in the tables below[6]:

Calvert Energy Group/OXEON
Fig. 5 - Calvert Energy Group GTL plant

The Calvert Energy Group offers modular GTL (flare and stranded gas to diesel) plants ranging in size from 0.2 MMscfd to 100 MMscfd. The technology used is exclusively licensed to Calvert Energy Group by OXEON.


Tab. 1 -  Calvert Energy Group data
CompactGTL
Fig. 6 - Compact GTL’s modular plant

CompactGTL’s modular unit offers a small-scale gas-to-liquid (GTL) solution for small- and medium-sized oil field assets where no viable gas monetization option exists so that the associated gas is either flared or reinjected.


Tab. 2 - Compact GTL’s modular unit data
GasTechno Energy & Fuels (GEF)
Fig. 7  - Gas Technologies LLC module

Gas Technologies LLC manufactures, installs and operates modular gas-to-liquids plants that utilize the patented GasTechno® single-step GTL conversion process. GasTechno® Mini-GTL® plants convert associated flare gas and stranded natural gas into high-value fuels and chemicals including methanol, ethanol and gasoline/diesel oxygenated fuel blends while serving to reduce greenhouse gas emissions. The unit capital cost of the plants is approximately 70% lower than traditional methanol production facilities and they require relatively limited operation & maintenance costs.

Tab. 3 - Gas Technologies LLC data
Greyrock
Fig. 8 - Greyrock Energy module P-5000

Greyrock Energy was founded in 2006 and is headquartered in Sacramento, California, with offices and a demonstration plant in Toledo, Ohio. Its sole focus is small-scale GTL Fischer-Tropsch plants for Distributed Fuel Production®, and it has a commercial offer of both a fully integrated 2000 bpd plant consuming about 20 MMscfd and smaller “MicroGTL” plants (5 – 50 bpd).

Tab. 4 - Greyrock Energy data
Velocys
Fig. 9 - Velocys plant

Velocys is a smaller-scale GTL company that provides a bridge connecting stranded and low-value feedstocks, such as associated gas and landfill gas, with markets for premium products, such as renewable diesel, jet fuel and waxes. The company was formed in 2001, a spin-out of Battelle, an independent science and technology organization. In 2008, it merged with Oxford Catalysts, a product of the University of Oxford. Velocys aims to deliver economically compelling conversion solutions. It is traded on the London Stock Exchange, with offices in Houston, Texas; Columbus, Ohio; and Oxford, UK.


Tab. 5 - Velocys data
Primus Green Energy
Fig. 10 - Primus System

Primus Green Energy is based in Hillsborough, New Jersey, USA. The company is backed by Kenon Holdings, a NYSE-listed company with offices in the United Kingdom and Singapore that operates dynamic, primarily growth-oriented, businesses. Primus Green Energy™ has developed Gas-to-Liquids technology that produces high-value liquids such as gasoline, diluents and methanol directly from natural gas or other carbon-rich feed gas.


Tab. 6 - Primus Green Energy data

5. Development Remarks


By taking advantage of new technologies, such as microchannel reactors, to shrink the FT and SMR hardware, GTL plants can be scaled down to provide a cost-effective way to take advantage of smaller gas resources. GTL plants based on the use of microchannel FT reactors can be operated on a distributed basis, with smaller plants located near gas resources and potential markets.

Smaller, modular GTL plants are suitable for use in remote locations. In contrast to conventional GTL plants, they are designed for the economical processing of smaller amounts of gas ranging from 100 million cubic meters (MMcm) to 1,500 MMcm, and they can produce 1,000 bpd–15,000 bpd of liquid fuels. The plants can be scaled to match the size of the resource, expanded as necessary, and potentially integrated with existing facilities on refinery sites.
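As a rough consistency check, the quoted gas and liquids ranges imply a near-constant gas consumption per barrel. The sketch below assumes the gas volumes are annual and uses illustrative unit conversions (1 m³ ≈ 35.31 ft³, 350 operating days per year):

```python
# Gas consumed per barrel of liquids for the quoted modular-plant ranges.
# Assumptions (not from the text): gas volumes are per year, 1 m3 = 35.31 ft3,
# 350 operating days per year.
CF_PER_CM = 35.31
DAYS_PER_YEAR = 350

def scf_per_bbl(gas_mmcm_per_year, liquids_bpd):
    """Standard cubic feet of feed gas per barrel of liquid product."""
    gas_scfd = gas_mmcm_per_year * 1e6 * CF_PER_CM / DAYS_PER_YEAR
    return gas_scfd / liquids_bpd

low = scf_per_bbl(100, 1_000)      # small end of the quoted range
high = scf_per_bbl(1_500, 15_000)  # large end of the quoted range
print(f"{low:,.0f} scf/bbl (small plant), {high:,.0f} scf/bbl (large plant)")
```

Both ends of the range work out to roughly 10 Mscf of gas per barrel of product, in line with the ~20 MMscfd feed quoted for Greyrock's 2,000-bpd plant.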

Smaller-scale GTL operations also pose a lower risk to producers. Since the plants are smaller, construction costs are reduced; and, since the plants are modular, investment can be phased. The construction time is short, at 18–24 months. In addition, because the modules and reactors are designed only once and then manufactured many times, much of the plant can be standardized and shop-fabricated in skid-mounted modules. This reduces the cost and risk associated with building plants in remote locations. In addition, the components can be designed to use standard, off-the-shelf equipment, so there is less strain on supply chains, and the need for onsite construction work is reduced.

Since the FT process also lies at the heart of the biomass-to-liquids (BTL) processes, the same technology can be used to produce high-quality, ultra-clean diesel and jet fuel from waste biomass, including municipal waste. Smaller-scale GTL plants offer advantages at all stages of production: upstream, midstream and downstream [7].


6. GTL-FT Technology New Concepts

The small-scale processing of natural gas requires fundamentally new technologies for converting hydrocarbons into liquid chemicals and fuels. There are several possibilities.

The first one is to develop more effective, less complex methods for converting hydrocarbon gases into syngas.

  • A very promising way to increase the efficiency and flexibility of the conversion of hydrocarbon gases into syngas is the gas-phase combustion of very rich hydrocarbon-air or hydrocarbon-oxygen mixtures in volumetric permeable matrixes. The partial oxidation of hydrocarbon gases is a very attractive method for small-scale syngas production since it is an exothermic process, which therefore requires no external heating and, consequently, no bulky and expensive heat-exchange equipment. This makes it possible to significantly decrease the size and, hence, the cost of the reformer.

The second is to develop fundamentally different methods for converting natural gas into chemicals without the intermediate stage of syngas production, either by working on the composition of existing catalysts or by developing new ones.

  • An alternative possibility to produce useful chemicals and liquid fuels from natural gas is their direct oxidation. Several direct methods of natural gas conversion into useful chemicals without intermediate production of syngas can be discussed; among them, the best known and most developed is direct oxidation. Direct partial oxidation with subsequent carbonylation and/or oligomerization of the oxidation products can be considered an alternative route for Gas-To-Liquids processes, which avoids syngas production, the most costly and energy-consuming stage of traditional GTL [8].

With smaller-scale GTL plants, the greatest challenge is to find ways to combine and scale down the size and cost of the reaction hardware while still maintaining sufficient capacity. This, in turn, depends on finding ways to reduce reactor size by enhancing heat-transfer and mass-transfer properties to increase productivity and intensify the syngas-generation and FT processes. The use of microchannel reactors offers a way to achieve these goals.

  • Microchannel technology is a developing field of chemical processing that intensifies chemical reactions by reducing the dimensions of the channels in reactor systems. Since heat transfer is inversely related to the size of the channels, reducing the channel diameter is an effective way of increasing heat transfer, thereby intensifying the process and enabling reactions to occur at significantly faster rates than those seen in conventional reactors.
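The inverse relationship between channel size and heat transfer can be made concrete: for fully developed laminar flow at constant wall temperature, the Nusselt number Nu = hD/k is approximately constant (~3.66 for a circular channel), so the heat-transfer coefficient h scales as 1/D. A minimal sketch with illustrative values (water coolant, k ≈ 0.6 W/m·K):

```python
# Laminar-flow heat-transfer coefficient vs. channel diameter.
# For fully developed laminar flow at constant wall temperature, Nu = h*D/k
# is ~3.66 for a circular channel, so h = Nu*k/D scales as 1/D.
# Illustrative values only; k is for water near room temperature.
NU_LAMINAR = 3.66
K_WATER = 0.6          # thermal conductivity, W/(m*K)

def h_coeff(diameter_m):
    """Convective heat-transfer coefficient, W/(m^2*K)."""
    return NU_LAMINAR * K_WATER / diameter_m

conventional = h_coeff(0.025)    # 25 mm conventional tube
micro = h_coeff(0.0005)          # 0.5 mm microchannel
print(f"h, 25 mm tube:     {conventional:7.0f} W/m2K")
print(f"h, 0.5 mm channel: {micro:7.0f} W/m2K")
print(f"intensification:   {micro / conventional:.0f}x")
```

Shrinking the channel from 25 mm to 0.5 mm raises h by a factor of 50 under these assumptions, which is the intensification that microchannel reactors exploit.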

The technology can be applied to both highly exothermic processes such as FT, and highly endothermic processes such as SMR. Microchannel FT reactors contain thousands of thin process channels filled with FT catalyst, interleaved with water-filled coolant channels. Since the small-diameter channels dissipate heat more quickly than do conventional reactors, more active FT catalysts can be used to significantly accelerate FT reactions, thereby boosting productivity.

In microchannel SMR reactors, the heat-generating combustion and SMR processes take place in adjacent channels. The high heat-transfer properties of the microchannels make the process very efficient (Fig. 11).

Fig. 11  - An FT microchannel reactor diagram (left), and the reactor in a full-pressure shell (right)[9]
Additional improvement can be obtained by catalyst research.

  • INFRA Technology represents the new generation of GTL technology, allowing the production of light synthetic crude oil straight out of the FT reactor, with four-fold performance and without byproducts (Fig. 12). The process does not require additional processing of waxes, and the synthetic crude oil is fully compatible with the existing oil infrastructure.

Fig. 12 - New technology applications [10]

The technology was made possible by creating a novel catalyst using cobalt as active metal in a multicomponent composite. Elimination of certain processing stages and production of high-quality, single-liquid product makes INFRA’s GTL solutions economically feasible from small-scale, pre-engineered, standardized, modular (as small as containers), easily deployed and transportable units all the way to large-scale, integrated gas processing plants.


7. Cost Analysis

By offering the ability to target supply into global liquid transportation fuel markets, GTL plants significantly diversify market opportunity and help smooth financial returns in volatile conditions where gas market prices and oil and petroleum product prices become decoupled.


7.1 Cash Flow Analysis Methodology to Evaluate the Commerciality of GTL Projects

There are several factors that determine the cash flow and income streams associated with GTL plants. The key factors required for a methodology that analyses the commercial attractiveness of a GTL plant in a multi-year cash flow model include:

  • Cost of feedstock (natural gas, coal, petroleum coke or biomass)
  • Prices of the petroleum products and chemicals produced and sold from the plants.

Those product prices are in most cases strongly influenced by benchmark crude oil prices. GTL products generally trade in price ranges that reflect prevailing refinery and petrochemical plant crack spreads. Sometimes GTL products trade at small premiums to refinery-derived products because of their superior quality (i.e. low sulphur and low aromatics in the case of diesel and gasoline).

Aspects to be considered are:
  • If the GTL project is an integrated project, then revenue from natural gas liquids extracted from the feed gas stream needs to be included in the project cash flow and income calculations
  • Capital costs to construct the GTL plant, which can be usefully compared on a unit US$/barrel/day basis of plant product throughput capacity
  • How capital costs are offset, recovered and/or depreciated over time and deducted as part of a taxable income methodology
  • GTL plant efficiency (i.e. unit quantities of feedstock required to produce one unit of product) on an energy and/or mass basis
  • GTL plant annual utilization rate (days/year) based upon maintenance and turnaround requirements
  • GTL plant operating and maintenance costs, including the costs of catalysts, chemicals and utilities
  • Cost of transportation (shipping) between the GTL plant and the market in which the products are sold
  • Fiscal deductions applied, which vary significantly from jurisdiction to jurisdiction
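A minimal multi-year cash-flow model tying these factors together might look like the following. All numbers are hypothetical placeholders, not data from any real project; product prices are assumed to track a crude benchmark, as discussed above:

```python
# Minimal multi-year cash-flow sketch for a GTL plant.
# All figures are hypothetical placeholders, not data from any real project.
def gtl_npv(capex_usd, product_bpd, oil_price, gas_price_per_mscf,
            gas_rate_mscf_per_bbl=10.0, opex_per_bbl=5.0,
            utilization_days=350, years=20, discount=0.10):
    """Net present value of a flat annual cash-flow stream after CAPEX."""
    product_price = 0.9 * oil_price          # products assumed to track crude
    annual_bbl = product_bpd * utilization_days
    revenue = annual_bbl * product_price
    feedstock = annual_bbl * gas_rate_mscf_per_bbl * gas_price_per_mscf
    opex = annual_bbl * opex_per_bbl
    cash_flow = revenue - feedstock - opex
    return -capex_usd + sum(cash_flow / (1 + discount) ** t
                            for t in range(1, years + 1))

# Oryx-like scale: 34,000 bpd, $6 B CAPEX, $70 crude, $1/Mscf feed gas
npv = gtl_npv(capex_usd=6e9, product_bpd=34_000, oil_price=70,
              gas_price_per_mscf=1.0)
print(f"NPV: ${npv / 1e9:.2f} B")
```

With these placeholder inputs the sketch returns a negative NPV, illustrating how strongly the result depends on the gas-oil price spread, the utilization rate and the unit capital cost.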

7.2 Cost Forecast

FT technology typically has four components: synthesis gas (syngas) generation, gas purification, FT synthesis and product upgrading. The third stage constitutes the distinctive technology that provided the basis for future technological developments and innovations. The remaining three technologies were well known before the invention of FT and have been developed separately.

The syngas is normally produced via high-temperature gasification in the presence of oxygen and steam.

For the components of the plant, some aspects can be considered for cost analysis:

          • The air separation unit typically represents a considerable CAPEX investment.
          • The economic breakthrough in small-scale GTL plants has occurred with advances in four areas:
            1. Commercial introduction of micro-channel F-T technology;
            2. Higher reactive cobalt catalysts;
            3. Mass production of F-T reactors;
            4. Modular construction of the plants.
          • Another fundamental challenge is that, due to environmental regulations, heavy feed slates (primarily asphalts and heavy fuel oils) are increasingly difficult to market and, therefore, become unwanted residues rather than revenue generators. GTL technology has a clear advantage here due to its complete lack of heavy slates. This may become a strong argument for GTL in the future, especially for FT installations within existing refineries that can be used to increase the share of light and middle distillates in the overall product portfolio[11].
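The unit capital cost metric suggested in Section 7.1 (US$ per bpd of capacity) can be computed directly from the plant figures quoted in Section 4:

```python
# Unit capital cost (US$ per bpd of capacity), the comparison metric
# suggested in Section 7.1, computed from the plant figures in Section 4.
plants = {                      # name: (CAPEX US$, capacity bpd)
    "Pearl GTL":    (19e9, 140_000),
    "Oryx GTL":     (6e9,   34_000),
    "Escravos GTL": (10e9,  33_000),
}
unit_costs = {name: capex / bpd for name, (capex, bpd) in plants.items()}
for name, cost in unit_costs.items():
    print(f"{name:13s}: ${cost:,.0f}/bpd")
```

The spread (roughly $136k/bpd for Pearl versus about $303k/bpd for Escravos) shows why CAPEX per barrel of daily capacity is a useful comparison basis across projects.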

8. Environmental Aspects and Benefits

GTL technologies can transform off-gas streams, which would otherwise be flared, into valuable liquid transportation fuels and chemicals, including high-quality gasoline or methanol, or a separate stream of hydrogen-rich vent gas that can be used as an additional onsite hydrogen or fuel source. This makes GTL an ideal solution for reducing gas flaring while boosting returns.

In addition, greenhouse gas emissions can be further reduced with GTL systems through the input of CO2 streams as a co-feed, which is converted into gasoline or methanol, representing a valuable use for what is typically considered a low-value or even negative-value gas stream.

Properties of GTL fuel include enhanced aquatic and soil biodegradability and lower aquatic and soil ecotoxicity. Fuels produced from the FT process offer significantly better performance than their petroleum-based equivalents. FT-derived diesel does not contain aromatics or sulfur, and it burns cleaner than petroleum-derived fuels, resulting in lower emissions of nitrogen oxide (NOx), sulfur oxide (SOx) and particulates. Exhaust emissions experiments on GTL products revealed an overall significant reduction of CO (22%-25%), hydrocarbons (30%-40%) and NOx (6%-8%). GTL diesel has the potential to be sold as a premium blendstock[12].

The combination of these features indicates that GTL fuel is less likely to cause adverse environmental impacts than clean conventional fuels. In addition, FT diesel can be blended with lower-cetane, lower-quality diesels to achieve commercial diesel environmental specifications.

When the feedstock includes a renewable component, whether renewable biogas (as in the case of the ENVIA Energy project), or forestry and sawmill waste (as in the case of Red Rock Biofuels’ proposed project in Oregon), the fuels produced deliver a significant reduction in lifecycle greenhouse gas (GHG) emissions over conventionally produced fuels.


[5]  EIA: International Energy Outlook, 2017.
[6] GGFR Technology Overview – Utilization of Small-Scale Associated Gas, February 2018.


Fuel Oil With 0.5% Sulfur Content

Author: Mauro Capocelli - Chemical Engineer – Researcher –University UCBM – Rome (Italy)

1. Theme Description

The presence of sulfur compounds in fuel oils causes concern both during refining processes (due to catalyst deactivation and corrosion) and during the fuel end-use, since combustion generates the emission of sulfur oxides. The main environmental concern from SOx emissions is related to respiratory problems. Sulphur oxides (with water) also produce sulphuric acid, the main cause of acid rain and corrosion. Furthermore, when the emissions are in the form of sulphate particles, sulfur also contributes to the formation of particulate matter.

The original sulfur content of crude oils (organic, in the form of thiols, sulphides and thiophenic compounds, and inorganic, such as S, H2S and FeS2) varies from 0.01 to 8 wt% (see Figure 1). Globally, the S amount in the distillation fractions increases with boiling range, and the class of aromatics is the most resistant to desulfurization.


Figure 1 - Main classes of S-containing compounds in crude oil

Of the ~100 mb/day of oil supply, about 4% is represented by oil-based marine fuel. Shipping is by far the main pathway of international commerce, and its emissions have a worldwide dispersion (also affecting climate)[1]. For decades, the ISO has accepted a limit of 3.5% sulphur for heavy bunker fuel. To lower pollution near ports, many governing bodies have established Emission Control Areas (ECAs), in which the maximum sulphur content of burned fuels is limited. The allowable level in these regions has been reduced from 1.5% (2010) to the present 0.1%. In parallel, the International Maritime Organization (IMO) has planned to lower the sulphur content to 0.5 wt% from 2020. Many Chinese ports, including Shenzhen and Shanghai, are going to implement the IMO-compliant 0.5% sulphur limit. These regulations require very deep desulfurization to meet ultra-low sulfur diesel (ULSD) specifications (15 ppm). According to McKinsey & Co., the shipping industry will react by switching to “a combination of marine gasoil and low-sulfur residuals […] generating very attractive investment on sulfur removal technologies”[2].


Figure 2 - Sulphur content in bunker fuels according to IMO regulations

2. Technological Options & Challenges

Foster Wheeler examined the impact of the new regulations on a typical refinery, concluding that the new targets will be achieved by processing the crudes with the lowest S-content or by increasing blending with distillates. From the market point of view, particularly considering the SECA regulations, distillate production will be under pressure and the new capital costs (upgrading/retrofit) will increase the price of bunker fuels up to the diesel level.[3] On the other hand, novel desulfurization projects (50-100 in the next 12 years) will be needed to produce ~200 million tonnes/year of residue meeting the future specifications. In synthesis, the options available to meet the future environmental standards are:

  1. A switch in crude selection rather than investing in expensive desulfurization. This means an increased demand for sweet crudes (Africa, Southeast Asia) at the expense of sour grades (from the Middle East); on this point, Stockle and Knight assert that few crudes are able to produce a fuel oil meeting this specification without some sort of residue upgrading/desulphurisation3.
  2. Blending with low-sulphur distillates. This could cause the cited economic consequences as well as possible technical issues for ship engineers in terms of engine failure (stability and compatibility).
  3. Moving towards alternative fuels, mainly LNG or methanol. LNG, now accounting for an additional 0.3 mb/d of bunker demand, represents an interesting opportunity for ferry routes and river transportation.
  4. Installation of advanced flue gas scrubbing technology; the main issue is the liquid discharge into the environment.
  5. Development of novel “breakthrough” desulphurization processes and/or a drastic change in refinery operation.

In the following, the main technological options of heavy fuel desulfurization are described.
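Option 2 (blending) reduces to a linear sulphur mass balance; a sketch of the distillate fraction required, with illustrative sulphur contents taken from the limits discussed above:

```python
# Sulphur mass balance for blending a high-S residual with low-S distillate
# (illustrative values based on the limits discussed in the text).
def distillate_fraction(s_residual, s_distillate, s_target):
    """Mass fraction of distillate needed so the blend meets s_target (wt%)."""
    return (s_residual - s_target) / (s_residual - s_distillate)

# 3.5 wt% S residual + 0.1 wt% S marine gasoil, targeting the 0.5 wt% limit:
x = distillate_fraction(3.5, 0.1, 0.5)
print(f"{x:.0%} distillate required in the blend")  # → 88%
```

Meeting 0.5% S from a 3.5% S residual with a 0.1% S distillate requires roughly 88% distillate in the blend, which illustrates the pressure on distillate markets noted above.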

2.1 Hydrodesulfurization (HDS)

It is the most common technique, already implemented in virtually every refinery; it needs hydrogen as a reactant and a catalyst (typically Co-Mo/Al2O3 and Ni-Mo/Al2O3) to convert sulfur compounds into H2S. Typical operating conditions are high temperature (>300 °C) and pressure (>100 bar). Heterocyclic compounds are hardly removed (due to sterically hindered adsorption on the catalyst surface), while thiols and sulfides are completely converted into H2S. The latter is subsequently separated from the fuel oils and oxidized into elemental sulfur (Claus process). HDS can be applied to different streams of the overall refining process: i) pre-upgrading (e.g. VGO hydrotreating); ii) residue upgrading; as well as iii) whole-crude hydrotreatment, directly generating low-sulphur crudes. These solutions are discussed in the report by Foster Wheeler, which also points out the increase in carbon emissions related to the new refinery configurations able to meet these standards[4].

The overall effectiveness of HDS is limited by: i) the metal content of heavy oils; ii) coking and fouling potential; iii) steric hindrance, during both the catalytic reaction and the adsorption.[5] In conclusion, pushing HDS to meet ULSD standards means high pressure and temperature (requiring high capital and operating costs), limited catalyst life, and a high energy and carbon footprint.


Figure 3 - Possible locations of hydrotreating units in the oil refinery

2.2 Adsorptive desulfurization (ADS)

This process, which consists in confining S-compounds onto a solid matrix, depends on the selectivity of the sorbent as well as on the regeneration method. Several sorbent materials have been evaluated for both model oils and distillates: activated carbon, silica-aluminas, zeolites, Gallium+Y-zeolites, Cu-zirconia and metal-organic frameworks[6]. Experimentally, acceptable desulfurization levels can be achieved under mild conditions. On the other hand, the process reliability is still not sufficient for industrial applications. Moreover, heavy oils present large molecules that strongly reduce the adsorption efficiency due to steric hindrance.

2.3 Bio-desulfurization (BDS)

This process does not require hydrogen or external energy, since it uses microorganisms to remove S atoms from organic compounds. It is still not practicable on an industrial scale. Some experimental evidence has been presented in the literature for model matrices6.

2.4 Extractive desulfurization (EDS)

Extractive desulfurization does not require hydrogen and can be operated at mild conditions. On the other hand, the system thermodynamics influences the process efficiency, since i) the solubility of the compounds in the solvent (acetone, ethanol, polyethylene glycols, etc.) limits the extraction yield; ii) the solvent and the oil should be immiscible to minimize solvent losses; iii) the viscosity of the fluids worsens the mixing; iv) the vapor pressure of the solvent limits the operating conditions; v) the solvent may contain other compounds extracted from the oil. Because of these drawbacks, the energy footprint of the solvent regeneration could be very high.
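The first limitation (extraction yield set by solubility) can be illustrated with a single-stage equilibrium mass balance; K and the solvent-to-oil ratio below are illustrative parameters, not data for any specific EDS solvent:

```python
# Single-stage liquid-liquid extraction: fraction of an S-compound removed
# at equilibrium, from a solute balance with partition coefficient K
# (solvent/oil concentration ratio) and solvent-to-oil volume ratio S.
# Purely illustrative; K and S are not data for any specific EDS solvent.
def removal(K, S):
    """Equilibrium fraction of solute transferred to the solvent phase."""
    return K * S / (1.0 + K * S)

for K in (0.5, 1.0, 2.0):
    print(f"K = {K}: {removal(K, 1.0):.0%} removed at solvent/oil ratio 1")
```

Even a favorable partition coefficient leaves a substantial fraction of sulfur in the oil at a practical solvent ratio, which is why multiple stages or higher solvent volumes (and hence costly regeneration) are needed.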

2.5 Oxidative desulfurization (ODS)

ODS is a viable alternative to HDS since oxidized sulfur compounds can be “easily” removed. The subsequent separation can be achieved by physical methods (e.g. extraction by a non-miscible polar solvent followed by gravity, adsorption or centrifugal separation); oxidized sulfur can also be removed by thermal decomposition. When followed by EDS, oxidation does not mitigate the solvent loss and energy cost (the abovementioned solvent-regeneration issues) but increases the process selectivity.

The process requires an oxidant (H2O2 being among the best; others are represented in the figure below), a catalyst (e.g. acids) and, when mass transfer across the aqueous and oil phases is the rate-limiting step, a phase-transfer agent (PTA) to enhance the kinetics of the liquid-liquid heterogeneous reaction system.


Figure 4 - Active oxygen for different oxidants

In fact, PTAs are able to form a complex with the oxidant in the aqueous phase, transporting it across the interface. In synthesis, ODS can be carried out i) in an acidic medium, ii) by an oxidizing agent, iii) by autoxidation, iv) by catalytic oxidation, v) by photochemical oxidation, vi) by ultrasound oxidation.


Several companies and research groups have introduced an intensification effect by means of ultrasound (US). SulphCo’s patented technology uses ultrasound to induce cavitation in a water/oil stream[7]. During ultrasonic cavitation (under the influence of the pressure rarefaction), cavities arise from dissolved gases by partial vaporization. Depending on the size of these cavities and the pressure variations, they undergo radial motion: the negative pressure induces expansion of the cavity until a maximum radius is attained; the vapor bubbles then undergo a rapid compression phase. The collapse dynamics are faster than mass and heat transfer (the temperature increase is comparable to an adiabatic compression, with heating rates > 10^9 K s^-1) and lead to high pressures (>100 bar) and temperatures (>5000 K):

T_max = Ta · [Pa (γ − 1) / Pi]

where Ta is the ambient temperature, Pi is the pressure inside the bubble at its maximum size, Pa is the ambient pressure at the moment of transient collapse, and γ is the heat-capacity ratio of the bubble gas. Thanks to these local extreme conditions, the collapsing cavity becomes a “hot spot”, concentrating the energy in very small zones. At the final moment of bubble collapse, wall motion is far more rapid than the diffusion dynamics of water vapor: the entrapped molecules dissociate, forming radical species.
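The collapse-temperature estimate defined by Ta, Pi and Pa can be evaluated numerically. The sketch below assumes the standard Noltingk-Neppiras adiabatic-collapse form, T_max = Ta·Pa·(γ − 1)/Pi, with γ the heat-capacity ratio of the bubble gas (an assumption, since the original expression is not reproduced here):

```python
# Adiabatic bubble-collapse temperature, assuming the standard
# Noltingk-Neppiras form: T_max = Ta * Pa * (gamma - 1) / Pi
# Ta: ambient temperature [K]; Pa: ambient pressure at collapse [atm];
# Pi: pressure inside the bubble at its maximum size [atm];
# gamma: heat-capacity ratio of the bubble gas (assumed, not from the text).
def t_max(ta, pa, pi, gamma):
    return ta * pa * (gamma - 1.0) / pi

# Monatomic gas (gamma = 5/3), 300 K / 1 atm ambient, 0.04 atm in the bubble:
print(f"T_max ≈ {t_max(300.0, 1.0, 0.04, 5.0 / 3.0):.0f} K")  # → 5000 K
```

With ambient conditions of 300 K and 1 atm and ~0.04 atm inside the bubble, a monatomic gas gives T_max ≈ 5000 K, of the order quoted above.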

On this basis, chemical reactions and physical consequences (intense shear, mixing, and high localized pressure and temperature) induce and accelerate several chemical processes[8].


Figure 5 - Sinusoidal acoustic pressure and related single-bubble radius-time curve (during acoustic cavitation)

SulphCo® technology demonstrated the efficient conversion of sulfides and other S species to sulfones (easily removed by downstream separation). Several research groups have tested US to overcome the mass-transfer limits and increase the reaction kinetics. Akbari et al. investigated the intensification effect that US produces on the efficiency and the catalyst deactivation during the oxidative desulfurization of model diesel over MoO3/Al2O3.[9] Bolla et al. studied the phenomenology of US-assisted ODS of liquid fuels by simulating the bubble dynamics and the involved chemical reactions, as well as by observing the combination of oxidizing agents (e.g. Fenton reagent) and ultrasound[10]. Bhasarkar et al. investigated the combined use of ultrasound and PTA for ODS. Good conversion has been observed in the simultaneous desulfurization/denitrification of liquid fuels in sonochemical flow reactors[11]. Different improvements achieved by US implementation in industrial desulfurization processes are described by Wu and Ondruschka (2010)[12].

Ionic liquids (ILs) have been implemented for their extraction characteristics in combined EDS/ODS schemes (see Figure 6). ILs consist of organic cations and inorganic anions; they are high-boiling solvents and can be tuned to meet the requirements of specific applications. Low-viscosity ILs have shown remarkable results for their regeneration (by a simple water-dilution and vacuum-distillation process).[13]

The process efficiency increases with oxidized compounds (sulfoxides and sulfones), but ILs are also able to obtain good removal of heterocyclic S-compounds. The possible reaction patterns, regeneration features, as well as future challenges and perspectives, have been described by Bhutto et al.[14]


Figure 6 - Simplified process scheme of IL-assisted desulfurization



[1] Di Natale, Carotenuto. Particulate matter in marine diesel engines exhausts: Emissions and control strategies. Transportation Research Part D 40 (2015) 166-191.
[3] M. Stockle and T. Knight (2009), Foster Wheeler Energy Limited. Impact of low-sulphur bunkers on refineries. Impact_of_low_sulphur_bunkers on_refineries.html#.WtTCx4hubIU
[4] M. Stockle and T. Knight (2009), Foster Wheeler Energy Limited. Impact of low-sulphur bunkers on refineries. Impact_of_low_sulphur_bunkers on_refineries.html#.WtTCx4hubIU
[5] Bhutto et al., 2016. Oxidative desulfurization of fuel oils using ionic liquids: A review. Journal of the Taiwan Institute of Chemical Engineers 62, 84-97.
[6] R. Javadli and A. de Klerk. Desulfurization of heavy oil. Appl Petrochem Res (2012) 1:3-19. DOI 10.1007/s13203-012-0006-6; Bhutto et al., 2016. Oxidative desulfurization of fuel oils using ionic liquids: A review. Journal of the Taiwan Institute of Chemical Engineers 62, 84-97.
[7] SulphCo® “Oxidative Desulfurization”. IAEE Houston Chapter, June 11, 2009.
[8] Capocelli et al. Sonochemical degradation of estradiols: Incidence of ultrasonic frequency. Chemical Engineering Journal 210, pp. 9-17.
[9] A. Akbari et al. Investigation of process variables and intensification effects of ultrasound applied in oxidative desulfurization of model diesel over MoO3/Al2O3 catalyst. Ultrasonics Sonochemistry 21 (2014) 692-705.
[10] Manohar Kumar Bolla. Mechanistic Features of Ultrasound-Assisted Oxidative Desulfurization of Liquid Fuels. Ind. Eng. Chem. Res. 2012, 51, 9705-9712.
[11] Gaudino et al., 2014. Efficient H2O2/CH3COOH oxidative desulfurization/denitrification of liquid fuels in sonochemical flow-reactors. Ultrasonics Sonochemistry 21 (2014) 283-288.
[12] Z. Wu, B. Ondruschka. Ultrasound-assisted oxidative desulfurization of liquid fuels and its industrial application. Ultrasonics Sonochemistry 17 (2010) 1027-1032.
[13] R. Abro. A review of extractive desulfurization of fuel oils using ionic liquids. RSC Adv., 2014, 4, 35302.
[14] Bhutto et al., 2016. Oxidative desulfurization of fuel oils using ionic liquids: A review. Journal of the Taiwan Institute of Chemical Engineers 62, 84-97.

Potential Opportunities of Self-Healing Polymers

Author: Mauro Capocelli - Chemical Engineer – Researcher –University UCBM – Rome (Italy)

1. Theme Description

Polymers are widespread in many sectors, from packaging to construction. As shown in Figure 1, polymer production reached about 400 Mton in 2015[1] and is expected to grow at a CAGR of 3.9% over the period 2015-2020.[2] Production mainly serves packaging (36%), building and construction (16%) and textiles (15%), while by polymer type the main products are PP (17%), LDPE (16%) and PPA fibers (15%).[3] The leading companies are Dow Chemical, BASF SE, Saudi Basic Industries Corporation, China Petrochemical Corporation, and Exxon Mobil.2 The main producing countries and regions are China (29%), Europe (19%) and NAFTA (18%).[4] In this scenario, self-healing polymers are an emerging class that falls within the broader category of smart polymers.[5] It is estimated that by 2025 these compounds could reach a market of 4.1 billion US$, with a CAGR of 27.2%.[6] The following sections describe self-healing polymers and their characteristics.

  Fig. 1
Figure 1 - World Plastic Production referring to Use Sector and Polymer Type from 1950 to 2015.3

2. Self-Healing Polymers

Self-healing polymers are materials that have "the capability to repair themselves when they are damaged without the need for detection or repair by manual intervention of any kind".[7] When cracks begin, they lead to chain cleavage and/or slippage with the formation of reactive groups. These groups can form oxidative products or rearrange themselves to repair the damage.[8] According to the operating mechanism, self-healing polymers can be divided into extrinsic and intrinsic, and into automatic and non-automatic. In the extrinsic case the damage is repaired by means of an external agent placed inside the matrix. The external agent can be liquid (confined in microcapsules, hollow fibers or microvascular networks) or solid (dispersed in the polymeric matrix), whereas intrinsic polymers can repair themselves.[9] Non-automatic materials need an external stimulus, such as light, heat, a laser beam, or a chemical or mechanical trigger, to repair the crack, while for automatic ones the repair is spontaneous.[10]

Fig. 2
Figure 2 - Schematic representation of extrinsic/intrinsic self-healing polymers.10
  Intrinsic Self-Healing Polymers

The cracks are repaired by a local increase in the mobility of the polymeric chains. This is made possible by the reduction of the material viscosity under an external/internal stimulus such as thermal energy, irradiation, pH changes, etc. (Figure 3). After cooling, the local properties are restored and the material can be used again. Several parameters can be tuned to ensure good physical and mechanical properties, such as molecular weight, cluster distribution and size, and crystallinity.[11]

  Fig. 3
Figure 3 – Viscosity and temperature trends from the damage to repair process.11

On the basis of the healing mechanism, these compounds can be divided into polymers based on reversible covalent bonds, supramolecular polymers and shape-memory polymers. The first category includes several bond types, such as disulphide, imine and acyl hydrazone bonds.[12] However, the most common are based on Diels-Alder/retro-Diels-Alder reactions.[13] These are called [4+2] cycloaddition reactions because they involve the 4π electrons of the diene and the 2π electrons of the dienophile. The best-known and most widely used system is furan/maleimide, owing to its low healing temperature of about 100 °C (for more details see A. Gandini).[14] In supramolecular polymers,[15] monomers are held together by non-covalent interactions such as hydrogen bonding, π-π stacking, metal-ligand complexes and ionomers.9 Compared with covalent bonds, non-covalent ones are weaker but more reversible. Shape-memory polymers,[16] instead, are compounds that can be plastically deformed but return to their original shape under external stimuli such as heat or light. The matrix is usually composed of two domains: one acts as netpoints defining the original shape of the polymer, and the other acts as molecular switches retaining memory of the original shape. A trade-off between mechanical strength and healing capacity is represented by polymer blends[17] (for more detail see L. A. Utracki et al.).[18]

  Extrinsic Self-Healing Polymers

Unlike intrinsic self-healing polymers, extrinsic ones need an external agent, placed inside the material matrix, to repair the damage. The healing agent can be confined as a liquid in capsules or in networks such as capillaries and hollow fibers, or blended as a solid into the polymer. The healing agent is then released by the rupture of these containers and reaches the cracks by capillary forces. Microencapsulation and microvascular networks are the most common techniques for making extrinsic self-healing polymers. In the first case, the healing agent can be encapsulated through the reactions of several mixtures (urea-formaldehyde, melamine-formaldehyde, etc.) in an oil-water emulsion (in-situ and interfacial techniques), or by dispersing the key component in a melted polymer, which is then emulsified and solidified by changing the temperature or removing the solvent.[19] The healing agent must have low viscosity, good wettability and minimal losses by volatilisation or diffusion into the polymer matrix. From the first systems, based on styrene/polystyrene blends and phenolic resins, development moved on to dicyclopentadiene monomer (DCPD) with the "Grubbs catalyst", and then to polydimethylsiloxane (PDMS).[20] Regarding vascular networks, the most common technique is based on hollow glass tubes in different configurations: all tubes are filled with a single resin, such as epoxy particles or cyanoacrylate, or with two "adhesives", such as an epoxy and its curing agent. Alternatively, one of the compounds can be injected into the tubes and the other confined in microcapsules.[21] However, these techniques only allow 1-2D networks. An emerging method consists of making a scaffold that, after solidification of the polymer matrix, is removed, creating a 3D structure into which the healing agent is then injected.19

  Fig. 4
Figure 4 - A) Operating mechanism of capsule-based and vascular networks;[22] B) SEM image of the rupture of a urea-formaldehyde microcapsule in a thermosetting matrix;[23] C) Optical image showing the release of the healing agent.[24]
  Healing Efficiency

The main techniques used to evaluate the healing efficiency are the Tapered Double Cantilever Beam (TDCB) test and the Tear Test. In the first case, a crack is generated in the center of the sample and propagated until failure. The coupon is then repaired by means of the healing properties of the material and loaded again. The Tear Test, instead, is used for elastomeric materials such as PDMS. The rectangular sample has an axial cut and two legs that are loaded until the crack propagates through the rest of the material. The healing efficiency is worked out by comparing the properties of the healed and virgin samples.[25]

η(TDCB) = K_IC(healed) / K_IC(virgin) = P_C(healed) / P_C(virgin)

η(Tear) = T(healed) / T(virgin) = F_Avg(healed) / F_Avg(virgin)

where K_IC is the fracture toughness, P_C the critical fracture load, T the tear strength and F_Avg the mean tearing force.

  Fig. 5
Figure 5 - A) TDCB sample[26], B) Tear Test.[27]
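Both test protocols reduce to the same ratio of a healed-sample property to the virgin-sample property. A minimal sketch of that calculation follows; all numerical values are hypothetical, not measured data from the works cited above.

```python
# Healing-efficiency ratios for the TDCB and tear tests described above.
# All input numbers are illustrative placeholders, not experimental data.

def healing_efficiency(healed: float, virgin: float) -> float:
    """Ratio of a healed-sample property to the same virgin-sample property."""
    if virgin <= 0:
        raise ValueError("virgin-sample property must be positive")
    return healed / virgin

# TDCB: efficiency from critical fracture loads P_C (proportional to K_IC
# in the TDCB geometry, so either ratio gives the same result)
eta_tdcb = healing_efficiency(healed=38.0, virgin=45.0)   # hypothetical loads in N

# Tear test: efficiency from mean tearing forces F_Avg
eta_tear = healing_efficiency(healed=1.8, virgin=2.0)     # hypothetical forces in N

print(f"TDCB efficiency: {eta_tdcb:.0%}")   # 84%
print(f"Tear efficiency: {eta_tear:.0%}")   # 90%
```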

3. Last Advancement in Self-Healing Polymers

Among intrinsic self-healing polymers, an emerging technique is the dispersion of thermoplastic particles (250-425 μm) of poly(ethylene-co-methacrylic acid) (EMAA) in a diglycidyl ether of bisphenol A (DGEBA) epoxy resin polymerized with triethylenetetramine (TETA). TDCB tests performed after healing at 150 °C for 30 minutes showed a healing efficiency of about 85%. This was achieved by the formation of bubbles that, by expanding, force the healing agent into the cracks.[28]

Keller et al., in their first work, tested a matrix of Sylgard 184 PDMS (Dow Corning) in which the healing agent was confined in two different types of urea-formaldehyde capsules: one containing a vinyl-terminated polydimethylsiloxane (PDMS) resin with a platinum catalyst, and the other containing a PDMS copolymer diluted with 20 wt% heptane to reduce the resin viscosity. Polymer and healing agent therefore have the same chemical nature. Tear tests showed a healing efficiency ranging between 70 and 100%.[29] In a subsequent work, the same polymer and the elastomer RTV 630 (GE Silicones) were tested under torsional fatigue. The experiments involved four samples of each compound with different amounts of substance in the two capsule types. The results showed that torsional stiffness was recovered after 5 hours, while fatigue crack growth was reduced by 24%.[30]

Toohey et al., instead, tried to mimic human skin by creating a 3D microvascular network covered by an epoxy substrate. The coating contained the "Grubbs" catalyst, while the network was filled with the DCPD healing agent. An acoustic emission sensor was used to detect crack events. Increasing the catalyst concentration up to 10% w/w allowed a maximum of seven healing cycles.[31] To obtain a greater number of cycles, this structure was then modified by introducing multiple isolated networks in which different healing agents can be confined. In this way a two-part (epoxy resin-amine hardener) alternating structure was obtained, and the number of cycles was increased up to 16.[32]

An exhaustive description of the latest advancements in self-healing polymers can be found in Zhang et al.[33] and Mauldin et al.20

Fig. 6
Figure 6 - Operation of EMAA particles.[34]

4. Conclusions

Self-healing polymers are promising smart materials that try to mimic nature (e.g. the healing of a skin wound or a broken bone) by repairing themselves without external intervention of any kind (e.g. welding, fusion).10 These compounds can be applied in several sectors, from packaging up to aerospace[35] and from coatings to corrosion prevention,22 and it is estimated that by 2025 they could reach a market size of 4.1 billion US$ with a CAGR of 27.2%.6 They are currently divided into extrinsic and intrinsic, automatic and non-automatic polymers, depending on the mechanism of action. Some emerging materials have been presented, from EMAA particles up to 3D microvascular networks. However, these works concern the laboratory scale, and only a few products are commercially available. More effort is therefore needed toward commercialization.

[7] Wilson, G. O., Andersson, H. M., White, S. R., Sottos, N. R., Moore, J. S. and Braun, P. V., 2010, Self-Healing Polymers, Encyclopedia of Polymer Science and Technology.
[8] Y. Yang and M. W. Urban, Self-healing polymeric materials, Chem. Soc. Rev., 2013, 42, 7446–7467.
[9] G. Li and H. Meng, Recent Advances in Smart Self-healing Polymers and Composites, Woodhead Publishing Series in Composites Science and Engineering: Number 58, 2015.
[14] A. Gandini, The furan/maleimide Diels–Alder reaction: A versatile click–unclick tool in macromolecular synthesis, Progress in Polymer Science, 2013, 38 (1), pp 1-29.
[18] L. A. Utracki, C. Wilkie, Polymer Blends Handbook, 2nd edition, Springer, 2014.
[28] S. Meure et al., Polyethylene-co-methacrylic acid healing agents for mendable epoxy resins, Acta Materialia, 57 (14), 2009, pp 4312-4320.
[33] Zhang et al., Basics of self-healing: State of the art, in Self-Healing Polymers and Polymer Composites, John Wiley & Sons, 2011, pp 1–81.
[35] Wolfgang H. Binder, Self-Healing Polymers: From Principles to Applications, Wiley-VCH, 2013.

Hydrodynamic in Nanopores: Applications for Recovery of Unconventional Resources or for Energy Storage

Author: Giovanni Franchi-Chemical Engineer-PhD Student -University UCBM - Rome (Italy)


1.Theme Description

World oil demand is growing steadily and today reaches about 100 million b/d.[1] Conventional oil reserves are about one third of the unconventional ones,[2] which include heavy oil, tight oil, shale gas, methane hydrates, etc. These resources are spread over extensive areas and need specific technologies to be extracted; they are therefore much more expensive than conventional ones.[3],[4] Several Enhanced Oil Recovery technologies exist (thermal, gas and chemical), but they do not exceed 40% recovery. Hence, to increase this percentage it is necessary to better understand the transport of oil and gas in nanoporous rocks. Indeed, because of the pore dimensions and the rock heterogeneity, conventional mathematical models are no longer suitable to describe the flow.[5] The following sections describe the flow in nanoporous rocks, the related mathematical tools, and simulation and experimental studies.

2.Transport in Material with Complex Pore Geometries

The flow through nanoporous rocks takes place within channels smaller than 100 nm[6] and cannot be described by conventional models. Unlike conventional reservoirs, unconventional ones have much poorer porous-bed characteristics: the porosity is between 2 and 6%, the permeability can change rapidly from 0.001 μD up to 1 mD, and the rock is oil-wet (the contact angle between fluid and rock is greater than 90°).[7] In tight oil, for example, the pore diameter is between 30 and 200 nm, including micro-, meso- and macro-pores. The reservoir is formed by several zones, such as oil + mobile water and gas + oil + immobile water, as shown in Figure 1. Oil production falls to low flow rates within 9-12 months. Therefore, as described in the following sections, several techniques have been studied to enhance oil recovery.[8]

Figure 1 - Conventional Reservoir vs Tight Oil.[9]
    Flow Regimes

The flow depends on the Knudsen number5 and, owing to the pore diameter, it is not in the continuum regime. Therefore it cannot be described by Darcy's law; slip, transition and free-molecule flow need to be considered. The Boltzmann equation can be solved to describe the flow (Figure 2), but to limit computational cost it is solved only for simple problems. Hence several alternative mathematical models are used, such as Molecular Dynamics (MD), Direct Simulation Monte Carlo, the Burnett equation and reduced-order Boltzmann equations (LBM and Grad's).[10] Hou et al.[11] proposed combining the strengths of the LBM and MD methods: MD is suitable to describe the fluid flow near the surfaces of the porous medium, while LBM describes the rest of the flow, saving time by means of simplified kinetic models.

Figure 2 - Flow regimes depending on the Knudsen number.[12]
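The regime selection above can be sketched as a simple function of the Knudsen number Kn = λ/L (mean free path over pore diameter). The boundary values used below (continuum below 10⁻³, slip up to 0.1, transition up to 10, free-molecule beyond) are the commonly quoted textbook thresholds, not values given in this article.

```python
# Flow-regime classification from the Knudsen number Kn = λ / L,
# with λ the gas mean free path and L the pore diameter.
# Regime boundaries are the usual textbook values (an assumption here).

def knudsen_number(mean_free_path_nm: float, pore_diameter_nm: float) -> float:
    return mean_free_path_nm / pore_diameter_nm

def flow_regime(kn: float) -> str:
    if kn < 1e-3:
        return "continuum (Darcy law applicable)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transition flow"
    return "free-molecule flow"

# Illustrative case: a mean free path of order 50 nm in a 100 nm shale pore
kn = knudsen_number(mean_free_path_nm=50.0, pore_diameter_nm=100.0)
print(f"Kn = {kn:.2f} -> {flow_regime(kn)}")   # Kn = 0.50 -> transition flow
```

This makes explicit why Darcy's law fails in nanopores: typical shale pore sizes put Kn well outside the continuum band.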
    Computational Analysis

At the computational level, the porous medium can be simulated in different ways. Unfractured porous media can be described by means of:[13] One-Dimensional Models, where the pore space is treated as a series of capillary tubes whose radii may or may not all be equal; the model can take tortuosity into account, but it cannot describe the interconnectivity of the pores. Continuum Models, where the domain is considered as a distribution of identical spheres; the model can represent an unconsolidated or consolidated porous medium depending on the overlap of the interconnections. Random Hydraulic Conductivity Models, in which the domain is divided into rectangles with a random hydraulic conductivity. For fractured porous media, the principal models are:14 Models of a Single Fracture, whose simplest version is two parallel flat plates; it can be solved analytically, but it is not suitable to describe the internal morphology of the fracture, since it does not take the fracture roughness into account. Models of Fracture Networks, in which fractured rocks are described as a network of interconnected elements; in this way it is possible to describe the flow in the fractures by means of 2D and 3D models. Models of Fractured Porous Media, suitable for describing flow in matrices with high permeability; these include dual-porosity and dual-permeability models (see for example the model used by Fragoso Amaya[14]). In the former the matrix acts as a storage medium, while in the latter both the matrix and the fracture network contribute to transport and fluid flow.
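The parallel-plate single-fracture model mentioned above admits a closed-form solution, the so-called "cubic law", in which the volumetric flow rate scales with the cube of the fracture aperture. A minimal sketch with illustrative values (not taken from the article):

```python
# "Cubic law" for laminar flow between two smooth parallel plates, the
# simplest single-fracture model:
#     Q = (w * b**3 / (12 * mu)) * (dp / L)
# b: aperture [m], w: fracture width [m], L: fracture length [m],
# mu: dynamic viscosity [Pa*s], dp: pressure drop [Pa].
# All input values below are illustrative only.

def cubic_law_flow(aperture_m: float, width_m: float, length_m: float,
                   viscosity_pa_s: float, dp_pa: float) -> float:
    return (width_m * aperture_m**3 / (12.0 * viscosity_pa_s)) * (dp_pa / length_m)

q = cubic_law_flow(aperture_m=1e-4, width_m=1.0, length_m=10.0,
                   viscosity_pa_s=1e-3, dp_pa=1e5)
print(f"Q = {q:.2e} m^3/s")
# Cubic scaling: doubling the aperture multiplies the flow rate by 2**3 = 8,
# which is why fracture aperture dominates the conductivity of a network.
```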

3. Methods to Improve the Recovery of Chemical Transformation Processes

Several techniques allow oil recovery to be improved; they can be classified into primary, secondary and tertiary recovery.[15],[16] The first consists of the extraction of oil via natural drive or pumps and recovers only 5-15% of the hydrocarbons. Secondary recovery, instead, consists of the injection of water/gas into the reservoir and allows about 30% recovery, while tertiary recovery tries to make the formation more amenable to oil extraction. Currently these technologies do not exceed 40%.[17] Oil recovery from reservoirs depends on different factors, such as the Mobility Ratio (M) and the Capillary Number (Nc).[18] The first represents the capacity of the oil to move through the pores. If M > 1, more fluid needs to be injected to obtain an optimal oil saturation in the pores, while M < 1 means that the mobility ratio is favourable. This can be achieved by reducing the viscosity of the oil (i.e. with thermal techniques) or by increasing the viscosity of the displacing fluid (i.e. with chemical techniques). The capillary number, instead, measures the relative weight of viscous forces against interfacial tension. The main techniques to improve oil recovery are described in the following sections.
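The two dimensionless groups above have standard definitions, assumed here since the text describes them only qualitatively: M compares the mobility of the displacing fluid with that of the displaced oil, and Nc compares viscous to interfacial forces. A short sketch with hypothetical waterflood values:

```python
# Standard forms of the mobility ratio and capillary number (assumed,
# since the article defines them only qualitatively):
#     M  = (k_rw / mu_w) / (k_ro / mu_o)   displacing vs displaced mobility
#     Nc = mu * v / sigma                  viscous vs interfacial forces
# All numbers below are illustrative, not field data.

def mobility_ratio(krw: float, mu_w: float, kro: float, mu_o: float) -> float:
    return (krw / mu_w) / (kro / mu_o)

def capillary_number(mu_pa_s: float, velocity_m_s: float, ift_n_m: float) -> float:
    return mu_pa_s * velocity_m_s / ift_n_m

# Waterflood of a moderately viscous oil (hypothetical values):
M = mobility_ratio(krw=0.3, mu_w=1e-3, kro=0.8, mu_o=5e-3)
Nc = capillary_number(mu_pa_s=1e-3, velocity_m_s=1e-5, ift_n_m=0.03)
print(f"M  = {M:.2f}")    # > 1: unfavourable, displacing fluid fingers through
print(f"Nc = {Nc:.1e}")   # raising Nc (e.g. surfactants lowering sigma) frees trapped oil
```

The example makes the two levers of the text concrete: thermal methods lower mu_o (reducing M), chemical methods raise mu_w or lower the interfacial tension sigma (raising Nc).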

  Thermal Enhanced Oil Recovery (TEOR)

This technique is applied to heavy crude oils with:[19] API gravity between 10 and 20°, reservoir depth less than 3000 ft, permeability of 500 mD and sand thickness between 30 and 50 ft. It includes Steam Injection and In-situ Combustion. The first consists of the injection of hot steam into the reservoir, which reduces the viscosity of the heavy oil and increases the pressure.[20] Steam can be injected periodically (Cyclic Steam Injection)[21] or through two horizontal wells (Steam-Assisted Gravity Drainage, SAGD), where the oil drains into the lower well by gravity.[22] In-situ combustion consists of the injection of dry or wet air into the reservoir. The combustion of part of the heavy oil (5-10% of the crude)[23] generates a combustion front that advances along the reservoir. This front is sustained by the coke present in the reservoir or, in the case of wet air, by the steam produced.[24],[25]

  Gas Enhanced Oil Recovery (GEOR)

This technology includes Miscible Gas Injection and Immiscible Gas Injection. In the former, CO2 or N2 is used to increase oil recovery. As shown in Figure 3a, the carbon dioxide is injected at 1200 psi and a density of 5 lb/gal; it mixes with the oil trapped in the pores, forming a concentrated mixture that flows back to the surface. The CO2 is then removed from the mixture, recompressed and injected again into the reservoir.[26]

CO2 flooding is also a promising technique for tight oil reservoirs, since waterflooding could form a film on the pore surface, decreasing the recovery. Figure 3b shows the common technique used in tight oil: the wells run vertically down to the tight formation and then parallel to the reservoir, and the gas is injected to fracture the rock, allowing the oil to move into the wells.[27]

Figure 3 - a) Waterflooding and carbon dioxide injection;[26] b) Fracking in tight oil.[28]
Immiscible Gas Injection consists of the injection of gas below the Minimum Miscibility Pressure (MMP). This technique is suitable for light oil rather than heavy oil.[18]

  Chemical Enhanced Oil Recovery (CEOR)

In heterogeneous reservoirs, CEOR performs better than GEOR. This technique, indeed, reduces the interfacial tension and alters wettability and mobility.[29] It includes Polymer Flooding, Surfactant Flooding and Alkaline Flooding. The first is used to minimize bypass effects due to capillary forces and to increase water viscosity. Usually, the polymer solution injected into the reservoir amounts to at least about 30% of the reservoir pore volume. Polymers can be divided into two categories: biopolymers and synthetic polymers.[30] Surfactants, instead, reduce the interfacial tension between oil and water and alter wettability, although part of these substances is adsorbed onto the rock surface.[31] Alkaline flooding is very efficient in reservoirs with a high acid content: the alkali reacts with the acids to form a surfactant solution that reduces interfacial tension, promotes emulsification and alters wettability.[32] Combinations of the previous solutions, such as Surfactant-Polymer Flooding and Alkaline-Surfactant-Polymer Flooding, are often used.

  Nanoparticles to Enhance Oil Recovery

Nanoparticles are receiving great attention as an emerging technology in the oil & gas field. These materials could be used as sensors injected into wells to probe reservoir properties (pH, hydrocarbon saturation, etc.) or as "smart fluids" for increasing oil recovery by altering wettability (toward more water-wet), improving the mobility ratio and reducing interfacial tension.[33] Smart fluids can be divided into three groups: metal oxides (Al2O3, CuO, Fe2O3/Fe3O4, etc.), organic (e.g. carbon nanotubes) and inorganic (e.g. silica).[34] Figure 4 shows the structure of the nanoparticles used to evaluate the oil recovery of a Berea sandstone sample with 17.45° API oil, air and liquid permeabilities of 184 mD and 60 mD respectively, and a porosity of 20%. The best response was given by a mixture of aluminium oxide and silicon oxide at a concentration of 0.05 wt%, owing to the reduction of interfacial tension.[35]

Among these, carbon nanotubes (CNTs) are emerging nanoparticles. These compounds fall into the fullerene category and have good resistance to corrosion. They can be arranged in single or multiple walls made of graphene, and their surface is hydrophobic with a high slip length.6,34 For other applications of nanoparticles in the oil and gas industry, such as corrosion inhibition and methane release from gas hydrates, see Fakoya et al.[36]

Figure 4 - Smart fluid application on a Berea sandstone sample: (a) titanium oxide, (b) aluminium oxide, (c) nickel oxide and (d) silica.[37]

4. Simulation Studies and Experimental Works

The literature offers several simulation studies, some of which are summarized in this section. Moraes de Almeida et al.[38] described the flow of water and light crude oil in silica nanopores by means of Molecular Dynamics. The nanopores were simulated with two hydrophilic terminations (silanol-rich and siloxane-rich), and three different scenarios were considered: water/oil infiltration into empty nanopores, water infiltration into oil-filled nanopores, and vice versa. In empty nanopores both water and oil infiltrated quickly (0.5 ns for oil and 1 ns for water), and the interfacial tension was reduced by about 35% for oil/siloxane terminations. In the other cases, water infiltration into water- and oil-filled pores occurred at 10 and 5000 atm respectively, while oil infiltration into water-filled pores occurred at 600 atm. Ross et al.[39] studied the friction coefficient for water flowing inside flat graphitic slabs (5 x 5 nm) and inside/outside carbon nanotubes (5 nm long), varying the characteristic length of the two configurations. A Molecular Dynamics model was used considering no-slip conditions at the solid-fluid interfaces, from which the slip length could be calculated. The tests showed that the friction coefficients depended on the curvature of the porous surfaces: in particular, they were higher for convex surfaces and lower for concave ones. Lee et al.[40] treated hydrocarbon recovery from shale gas. They simulated the kerogen structure by means of several models (disordered, ordered and composite) based on molecular and statistical simulation. Recovery depends on interfacial tension and is thermally activated: the energy barrier is high for immiscible fluids such as water, and lower for miscible ones such as CO2 and C3H8. Unlike carbon dioxide, propane is recovered together with the extracted methane.

Figure 5 - Model simulations and results in the presence of water: a) I, II and III represent the three different structures considered in the simulations; b) I and II outline the starting and end points of the simulation, where the methane is trapped inside a CNT membrane with a triangular shape (yellow); the left side is kept at constant pressure by methane, while the right side is maintained at low pressure by water; c) amount of methane extracted from the pores as a function of time.[41]

Alfarge et al.[42] simulated oil recovery from the Bakken formation by injecting three different miscible gases: CO2, lean gas and rich gas. The well was stimulated by means of 5 hydraulic fractures spaced about 200 ft apart. The tests initially showed high production, followed by a rapid decline due to the pressure drop near the production well. Three different scenarios were simulated, varying the number of cycles from two to ten, the duration of injection from two to six months, and the duration of soaking from one to three months. CO2 increased molar diffusivity, while rich gases needed a longer soaking period and lean gases required larger injected volumes. Prajapati et al.[43] simulated the flow through shale reservoirs. They considered a binary CH4-CO2 mixture flowing through a kerogen matrix by means of four models: Wilke, Wilke-Bosanquet, Maxwell-Stefan and the Dusty Gas Model. This led to a system of nonlinear equations solved with COMSOL Multiphysics. It was demonstrated that Knudsen diffusion and binary molecular diffusion must both be considered; indeed, the predicted flux is about 10 times higher in the Wilke and Maxwell-Stefan models than in the Wilke-Bosanquet, Maxwell-Stefan-Bosanquet and Dusty Gas models. Regarding pilot tests, in 2010 there were about 1500 EOR projects (e.g. Carabobo,[44] Grosmont,[45] etc.), of which 78% in sandstone, 18% in carbonate and 4% in turbidite and offshore fields. Among EOR technologies, thermal and chemical projects are widespread in sandstone, while gas and water recovery prevail in the rest.[46] One of the most interesting projects concerns the Bakken formation, one of the biggest oil and gas reservoirs in the USA. It is estimated that this geological formation could yield up to 40 billion barrels,[47] but only 10% is currently recovered because of the very low permeability (0.0018-0.0036 mD).[48] Therefore, from 2008 to 2014 seven pilot tests were performed to improve oil recovery: 2 in Montana and 5 in North Dakota.
Several techniques were used: cyclic injection of CO2 and water, flooding with water and enriched natural gas, and vertical injection of CO2. Despite the ultra-low permeability, injectivity turned out not to be an issue for either gas or water; however, the increase in oil recovery was low. New tests therefore need to be performed to understand the fracture networks and the flow in nanoporous rocks, and to collect more data. This can be achieved with cores from the vertical and lateral sections, subsequently analysed in the laboratory (for more information about the pilot tests see[49]).

5.Energy Subsurface Storage

The most mature and widely used technology is Underground Gas Storage (UGS): today there are about 630 underground gas storage facilities.[50] The gas is injected from the pipeline into underground formations, such as depleted oil reservoirs, when demand is low, and withdrawn when demand grows. These storages do not have 100% efficiency, because part of the gas, called "cushion gas", remains in the subsurface to keep the reservoir pressurized.[51] A promising technology is Carbon Capture and Storage (CCS), in which the CO2 injected into the subsurface can work as a displacing fluid (see the section on Gas Enhanced Oil Recovery) or can be stored. Generally, it is injected at a depth of about 800 m, where CO2 is in a liquid or supercritical state. It can be retained by a "cap rock", such as clay rock, that is impermeable to CO2, or by capillary forces that trap the CO2 in the pores.[52]

Figure 6 - Applications of Carbon Capture and Storage.[52]

6. Conclusions

Technologies for Enhanced Oil Recovery of unconventional hydrocarbons and for energy storage already exist. The most widespread are TEOR (Thermal Enhanced Oil Recovery) and Underground Gas Storage, but they do not achieve high efficiencies.

Several mathematical models are used to describe the flow in porous rocks. However, porous media have a chaotic configuration, and the transport equations can be solved analytically only in a few cases. Furthermore, the models are based on simplifying hypotheses that allow only a specific phenomenon to be described. It is therefore necessary to keep investigating the hydrodynamics of nanoporous rocks by means of pilot tests (e.g. Carabobo, Grosmont, Bakken), so as to improve the technologies and the models that describe these phenomena exhaustively. Among emerging technologies, nanoparticles (e.g. silica, CNTs) can play a pivotal role in increasing oil recovery. However, these compounds have been tested only at the laboratory scale and are very expensive; it is therefore necessary to reduce their production cost by achieving better performance at lower concentrations.

[4] A. Muggeridge et al., Recovery rates, enhanced oil recovery and technological limits, Phil. Trans. R. Soc. A 372: 20120320.
[7] A. Satter, G. M. Iqbal, Reservoir Engineering: The Fundamentals, Simulation, and Management of Conventional and Unconventional Recoveries, Gulf Professional Publishing, 2016.
[13] M. Sahimi, Flow and Transport in Porous Media and Fractured Rock: From Classical Methods to Modern Approaches, 2nd Edition, Wiley-VCH, 2011.
[15] Nowadays these different technologies are grouped into two categories: IOR (Improved Oil Recovery), which includes secondary and tertiary recovery, and EOR (Enhanced Oil Recovery), which includes only tertiary recovery.
[18] James G. Speight, Introduction to Enhanced Recovery Methods for Heavy Oil and Tar Sands, Gulf Professional Publishing, 2016.
[20] G. Chunsheng, Numerical Simulation of Steam Injection for Heavy Oil Thermal Recovery, Energy Procedia 2017, 105, pp 3936–3946.
[21] J. Alvarez and S. Han, Current Overview of Cyclic Steam Injection Process, Journal of Petroleum Science Research 2013, 2(3), pp 116-127.
[24] N. Mahinpey, In situ combustion in Enhanced Oil Recovery (EOR): a review, Chemical Engineering Communications 2007, 194 (8), pp 995-1021.
[29] S. Kumar and A. Mandal, A comprehensive review on chemically enhanced water alternating gas/CO2 (CEWAG) injection for enhanced oil recovery, Journal of Petroleum Science and Engineering 2017, 157, pp 696-715.
[30] A. Thomas, Polymer Flooding, INTECH, 2016.
[31] J. J. Sheng, Status of Surfactant EOR Technology, Petroleum 2015, 1(2), pp 97-105.
[50] C. Gniese, Relevance of Deep-Subsurface Microbiology for Underground Gas Storage and Geothermal Energy Production, Adv Biochem Eng Biotechnol 2014, 142, pp 95-121.
[51] B. M. Freifeld et al., Well Integrity for Natural Gas Storage in Depleted Reservoirs and Aquifers, DOE National Laboratories Well Integrity Work Group, 2016.

Latest Advancements in Process Control in Refineries and Chemical Plants

Author: Giovanni Franchi – Chemical Engineer – PhD Student – University UCBM – Rome (Italy)

1. Theme Description

The world production of chemicals will increase by 144 million metric tons by 2020[1], with a market of 4,650 billion US$.[2] Automation and Process Control play a pivotal role in industrial plants; indeed, they improve product quality, plant efficiency, and the safety and reliability of the processes.

Automatic feedback controls were introduced in the 1920s-1930s, mounted directly on the controlled equipment. Since then, process control has spread rapidly, from the first digital devices at the end of the 1950s up to Programmable Logic Controllers (PLCs) and Distributed Control Systems (DCS) in the 1970s. Nowadays networks of computers manipulate thousands of variables, but 85-95% of feedback control loops are still based on the Proportional-Integral-Derivative (PID) scheme developed in the 1930s. Furthermore, the flow rates of liquids and gases are controlled by pneumatic valves.[3]

The use of advanced controls can increase a plant's profit margin by 10-20% and reduce emissions by about 70%[4]. Therefore, in the following sections PID tuning optimization, APC (Advanced Process Control) and MPC (Model Predictive Control) are described. Finally, an overview of the latest software in process control and of "smart control" is given.

2. Process Control

Aspen Technology Inc. has defined five levels of maturity for a refinery or chemical plant depending on the control level, from level zero, where no process simulation is used, up to level four, where several models are combined in a single flowsheet and engineers can make decisions by monitoring key parameters.[5] As can be seen in Figure 1, a plant usually runs in a safety zone called the "comfort zone", away from the constraint limits. With PID optimization and APC it is possible to reduce the amplitude of oscillations by a factor of three to ten, working near the constraint limits and increasing productivity and profit margins.[6]

Figure 1 - "Comfort Zone" and Positive Effects by using PID tuning optimization and APC.6
Therefore, in this section PID tuning optimization and APC controls are described.

PID Controls

PID controllers are the most common controls used in chemical and petrochemical plants due to their easy implementation and robustness (Figure 2).

The controller takes a corrective action depending on the magnitude of the error[7]:

  • proportional action cuts off most of the errors;
  • integral action reduces steady-state error or off-set;
  • derivative action decreases maximum overshoot.
Figure 2 - PID control loop.[8]

In order to obtain desirable outputs, a separate controller can be used for each variable (decentralized strategy) or a single controller can manipulate all the variables (centralized strategy)3. The PID is usually implemented in the parallel form, expressed as follows:

u(t) = u̅ + Kc [ e(t) + (1/τI) ∫ e(t) dt + τD de(t)/dt ]

where u̅ = bias (steady-state) value; Kc = controller gain; e(t) = error signal, equal to (set point – present value); τI = integral (reset) time; τD = derivative time.
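A minimal discrete-time sketch of this parallel form (assuming a rectangular-rule integral and a backward-difference derivative; not any vendor's implementation) could be:

```python
class PID:
    """Parallel-form PID: u = u_bias + Kc*(e + (1/tau_i)*integral(e) + tau_d*de/dt)."""

    def __init__(self, kc, tau_i, tau_d, u_bias=0.0):
        self.kc, self.tau_i, self.tau_d = kc, tau_i, tau_d
        self.u_bias = u_bias          # steady-state bias value
        self._integral = 0.0
        self._prev_e = None

    def update(self, setpoint, pv, dt):
        e = setpoint - pv                       # error = set point - present value
        self._integral += e * dt                # rectangular-rule integral
        deriv = 0.0 if self._prev_e is None else (e - self._prev_e) / dt
        self._prev_e = e
        return self.u_bias + self.kc * (e + self._integral / self.tau_i
                                        + self.tau_d * deriv)
```

Calling, for instance, `PID(kc=2.0, tau_i=5.0, tau_d=0.5).update(setpoint=1.0, pv=0.0, dt=0.1)` returns the first control move for a unit error.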

Tuning Optimization

There are several tuning techniques for finding proper PID parameters.

These methods have been developed since the 1940s and can nowadays be divided into two categories: classical and artificial-intelligence methods.

Classical methods include the Ziegler-Nichols and Cohen-Coon methods. Ziegler and Nichols proposed two methods, the first called "step response", which can be applied only to open-loop stable plants. It considers, indeed, the response of an industrial process as an S-shaped curve without overshoot. As can be seen in Figure 3, the delay time (L) is given by the intersection of the tangent line at the inflection point of the curve with the x-axis, while the time constant (T) is given by its intersection with the steady-state line. From these values it is possible to find the PID parameters.

Figure 3 – Parameters of the First Ziegler Nichols Method.[9]

The second method is called the "continuous cycling method". It finds the critical frequency of the system by increasing the proportional gain up to the stability limit. The two parameters that describe the response of the system are KCU (ultimate gain) and PU (ultimate period). Table 1 shows the relationships between the PID parameters and the two Ziegler-Nichols methods.

Table 1 – Parameters of the Step Response and Continuous Cycling Methods.[10],[11]
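The correlations summarized in Table 1 are the classical textbook Ziegler-Nichols rules (as given, e.g., in Seborg[11]); written out as code:

```python
def zn_step_response(L, T):
    """Ziegler-Nichols open-loop ('step response') rules,
    given the delay time L and the time constant T."""
    return {
        "P":   {"Kc": T / L},
        "PI":  {"Kc": 0.9 * T / L, "tau_I": L / 0.3},
        "PID": {"Kc": 1.2 * T / L, "tau_I": 2.0 * L, "tau_D": 0.5 * L},
    }

def zn_continuous_cycling(Kcu, Pu):
    """Ziegler-Nichols closed-loop ('continuous cycling') rules,
    given the ultimate gain Kcu and the ultimate period Pu."""
    return {
        "P":   {"Kc": 0.5 * Kcu},
        "PI":  {"Kc": 0.45 * Kcu, "tau_I": Pu / 1.2},
        "PID": {"Kc": 0.6 * Kcu, "tau_I": Pu / 2.0, "tau_D": Pu / 8.0},
    }
```

For example, a process with L = 2 s and T = 10 s gives the PID setting Kc = 6, τI = 4 s, τD = 1 s.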

The Ziegler-Nichols methods are suitable for level control but not for flow or liquid-pressure loops, which require a rapid response.[12] In these cases the Cohen-Coon method is used.[13] This method places three poles, two complex and one real, that minimize the integrated error and give a decay ratio of about 1/4.[14]

Artificial-intelligence methods include dozens of techniques. Some of them, such as the Genetic and Differential Evolution Algorithms, are described below. For the others, such as Simulated Annealing (SA), fuzzy systems, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), etc., the references[15],[16] can be consulted.

The Genetic Algorithm starts from a random population of binary strings called chromosomes, each of which represents a candidate solution of the problem. The strings are decoded to real numbers that define the PID parameters. These values are applied by the PID controller and the response is evaluated by means of an objective function such as MSE (Mean Square Error), IAE (Integral Absolute Error), ISE (Integral Squared Error), etc. The fitness values are subjected to a process of selection, crossover and mutation until the best fitness is obtained.[17]

Figure 4 - Flow chart of Genetic Algorithm.[18]
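The workflow of Figure 4 can be sketched in code. The snippet below is an illustrative toy, not the authors' implementation: it assumes a hypothetical first-order plant (gain K = 2, time constant τ = 5 s), uses the IAE as the objective function, and encodes each PID parameter as a 10-bit string, as in the binary-chromosome scheme described above.

```python
import random

def closed_loop_iae(kc, tau_i, tau_d, dt=0.05, t_end=20.0):
    """IAE of a unit set-point step for an assumed first-order plant
    dy/dt = (K*u - y)/tau controlled by the parallel-form PID."""
    K, tau = 2.0, 5.0                    # hypothetical plant parameters
    y, integ, prev_e, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kc * (e + integ / tau_i + tau_d * (e - prev_e) / dt)
        prev_e = e
        y += dt * (K * u - y) / tau
        iae += abs(e) * dt
        if abs(y) > 1e6:                 # unstable tuning: penalise and stop
            return float("inf")
    return iae

BITS = 10                                         # bits per parameter
BOUNDS = [(0.1, 5.0), (0.5, 10.0), (0.0, 1.0)]    # Kc, tau_I, tau_D ranges

def decode(chrom):
    """Map each 10-bit slice of the binary chromosome to a real parameter."""
    out = []
    for i, (lo, hi) in enumerate(BOUNDS):
        bits = chrom[i * BITS:(i + 1) * BITS]
        frac = int("".join(map(str, bits)), 2) / (2 ** BITS - 1)
        out.append(lo + frac * (hi - lo))
    return out

def ga_tune(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    fit = lambda c: closed_loop_iae(*decode(c))
    pop = [[rng.randint(0, 1) for _ in range(BITS * len(BOUNDS))]
           for _ in range(pop_size)]
    best = min(pop, key=fit)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fit)     # tournament selection
            p2 = min(rng.sample(pop, 3), key=fit)
            cut = rng.randrange(1, len(p1))           # single-point crossover
            child = p1[:cut] + p2[cut:]
            for j in range(len(child)):               # bit-flip mutation
                if rng.random() < 0.02:
                    child[j] ^= 1
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=fit)             # keep the best-so-far
    return decode(best), fit(best)
```

With these assumptions, `ga_tune()` returns a (Kc, τI, τD) triple whose closed-loop IAE is far lower than that of a sluggish hand-picked tuning.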

The Differential Evolution Algorithm, instead, starts from the initialization of a real-encoded matrix whose rows represent the PID parameters and whose columns represent the i-th population vector. Each individual is evaluated by the PID controller and the result represents its fitness value. Then a crossover step, involving the target vector (a vector of the population) and a mutant vector (built from three randomly selected vectors of the population), generates a trial vector whose fitness value is again evaluated by the PID. Finally, the fitness values of the target and trial vectors are compared and the one with the minimum value is kept; in this way an individual of the new population is generated. The algorithm stops when the new population is complete.

Figure 5 - Flow Chart of Differential Evolution.[19]
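The DE loop described above (target vector, mutant vector from three random individuals, trial vector, greedy selection) can be sketched as follows; the plant model and parameter bounds are illustrative assumptions, not taken from the source.

```python
import random

def closed_loop_iae(kc, tau_i, tau_d, dt=0.05, t_end=20.0):
    """IAE of a unit set-point step for an assumed first-order plant
    dy/dt = (K*u - y)/tau controlled by the parallel-form PID."""
    K, tau = 2.0, 5.0                    # hypothetical plant parameters
    y, integ, prev_e, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kc * (e + integ / tau_i + tau_d * (e - prev_e) / dt)
        prev_e = e
        y += dt * (K * u - y) / tau
        iae += abs(e) * dt
        if abs(y) > 1e6:                 # unstable tuning: penalise and stop
            return float("inf")
    return iae

BOUNDS = [(0.1, 5.0), (0.5, 10.0), (0.0, 1.0)]    # Kc, tau_I, tau_D ranges

def de_tune(pop_size=15, generations=40, F=0.6, CR=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    cost = [closed_loop_iae(*v) for v in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutant vector from three distinct random individuals.
            a, b, c = rng.sample([v for j, v in enumerate(pop) if j != i], 3)
            mutant = [min(max(a[k] + F * (b[k] - c[k]), lo), hi)
                      for k, (lo, hi) in enumerate(BOUNDS)]
            # Binomial crossover between target and mutant -> trial vector.
            jrand = rng.randrange(len(BOUNDS))
            trial = [mutant[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(len(BOUNDS))]
            c_trial = closed_loop_iae(*trial)
            if c_trial <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, c_trial
    i_best = min(range(pop_size), key=cost.__getitem__)
    return pop[i_best], cost[i_best]
```

The greedy comparison between target and trial vectors is what distinguishes DE from the GA's probabilistic selection: an individual is only ever replaced by a candidate that is at least as fit.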

The main limit of feedback control is that the corrective action takes place only after the output has been perturbed from its set point. Therefore, more advanced schemes were developed, such as PID plus feedforward, which allows intervening before the disturbance affects the output, or cascade control, composed of two controllers, two sensors and one actuator acting on two processes in series.[20] Although PID controllers ensure good stability and suppression of disturbances, they fail to optimize process performance because of its multivariable nature and the complex interactions between controlled variables. Therefore, Advanced Process Control is necessary.[21]

3. APC

APC includes all the software that allows controlling critical variables and predicting quality in real time, such as:

  • Statistical Process Control (SPC), which uses random sampling and statistical analysis to identify causes outside the process that change the quality of a product. This method is used especially for manufacturing lines, but it also fits processes whose output can be measured.[22]
  • Run-to-Run (R2R) control is widespread in the semiconductor industry (for more information see Moyne et al.[23]), but it can also be applied to batch processes such as chemical vapor deposition or batch chemical reactors10. The quality of the product is evaluated at the end of the run and the set points are changed between two successive runs. Therefore, this control is used when there are not enough on-line measurements of the products of interest.
  • Model Predictive Control (MPC) described more in detail later.

An example of APC implementation is described by Howes et al.6 for a lube oil process. The system consists of 12 manipulated variables, 28 controlled variables and 11 feedforward controls. By means of the Pitops software developed by Pi Control, the plant increased its production rate by 5%, saving about 1.3 M€. The software, indeed, identifies the parameters of the system in 10 minutes from historical data, without step tests. Other examples are Canada's Yara Belle Plaine Inc. and South Korea's LG Petrochemical Corp. The former applied APC techniques to a nitric acid plant, reducing methane emissions by 25% while maintaining a high combustion temperature; the latter applied them to a naphtha cracker, improving yield by 5%, reducing cold-side energy consumption by 8% and saving 100,000 $/y.[24]


The precursor of MPC is LQG (Linear Quadratic Gaussian) control, developed by Kalman in the 1960s, but the first MPC generation appeared in the 1970s with IDCOM (developed within ADERSA) and DMC (developed within Shell Oil). Nowadays the fifth generation has been reached, and Honeywell, AspenTech and Shell dominate the market[25]. MPC is suitable for describing the behaviour of MIMO (Multi-Input, Multi-Output) processes.

As can be seen in Figure 6, a classical plant control structure involves different hierarchical levels[26]: plant-wide optimization, a local economic optimizer and dynamic constraint control. Usually this is implemented with several PID controllers, lead-lag (L/L) blocks and high/low select logic. With MPC, shown in more detail on the right, this can be achieved with better results by acting on the difference between actual and predicted values (residuals).11

Figure 6 – Comparison between PID and MPC controllers.[27],[28]
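The receding-horizon idea behind MPC can be sketched for an assumed scalar linear model (illustrative parameters, not any vendor's implementation): the controller predicts the output over a horizon, minimizes a quadratic tracking-plus-effort cost, and applies only the first move before re-solving from the next measured state.

```python
import numpy as np

def mpc_step(y0, r, a=0.9, b=0.5, N=10, lam=0.01):
    """One receding-horizon move of an unconstrained linear MPC for the
    assumed scalar model y[k+1] = a*y[k] + b*u[k]."""
    # Prediction over the horizon: Y = F*y0 + G*U.
    F = np.array([a ** i for i in range(1, N + 1)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (a ** (i - j)) * b
    # Minimise ||Y - r||^2 + lam*||U||^2 via the normal equations.
    H = G.T @ G + lam * np.eye(N)
    U = np.linalg.solve(H, G.T @ (r - F * y0))
    return U[0]                          # apply only the first move

# Closed-loop simulation: the model/plant mismatch (the residual) is
# absorbed at every step because the horizon is re-solved from the
# measured state.
y = 0.0
for _ in range(50):
    y = 0.9 * y + 0.5 * mpc_step(y, r=1.0)
```

Industrial packages add constraint handling, economic optimization and multivariable models on top of this basic mechanism.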
Commercial MPC Software

Nowadays the main MPC packages include DMCplus, developed by AspenTech, SMOC by Shell Global and RMPCT by Honeywell. A brief description of each follows; for more details see Lahiri[29].

DMCplus derives from the fusion of Dynamic Matrix Control (DMC) and Setpoint Multivariable Control Architecture (SMCA). The software is composed of several packages and can handle Finite Impulse Response (FIR), linear Multi-Input Multi-Output (MIMO) and nonlinear Multi-Input Single-Output (MISO) state-space models. Recently AspenTech introduced Adaptive Process Control, which reduces the possibility of "flipping" behaviour of the plant caused by differences between the plant and the controller model. This package, indeed, forces the system to work in an optimum area instead of at an optimum set point; in this way the performance of the plant is not compromised. The model, moreover, is built from historical process data, with the parameters adjusted online by an adaptive model, saving time.

SMOC has been used in more than 430 applications, such as crude distillation, hydrocracking, styrene production, etc.

 It includes several packages:

  • AIDApro is offline software that analyzes open- and closed-loop data, defining the mathematical model to be used while taking unmeasured disturbances into account.
  • MDpro is offline software that allows controlling base and multivariable control loops by means of statistical methods.
  • RQEpro uses a Kalman filter that adapts the model to the process conditions. It provides parameters for the online process, reducing maintenance and increasing its stability.
  • SMOCpro is used for multivariable control system optimization. This package uses a grey-box approach where input and output are related by means of intermediate variables. In this way engineers can apply a cascade correction, optimizing the response of the process.

RMPCT manages processes with large errors and strong interactions between controlled variables. The controller doesn't follow a specific trajectory but can move along any trajectory within the constraints defined by a "funnel". At the same time, the controlled variables aren't forced to keep their set points but can vary within a range. In this way disturbances are rejected and the control is optimized.

Smart Control

In the era of digital devices, "smart control" for the chemical and petrochemical industry can play a key role in reducing costs, saving materials and increasing production rates. The idea is to create intelligent networks where the flowsheet and the variables are optimized in real time.[30] Emerson is the leading company in the sector, providing smart solutions both for old and new refineries, such as electronic marshalling and the HART (Highway Addressable Remote Transducer) protocol. The former eliminates cross-wiring, reducing the space occupied and the time needed to add new I/O interfaces[31]; the HART protocol, instead, matches the characteristics of the analog and control systems, removing repetitive problems and predicting unexpected failures.[32] Several companies have implemented smart controls, such as Chevron/PDVSA in the Petropiar refinery, saving 70 M$ in two years and reducing pre-commissioning and commissioning costs by 40% and losses due to instrument faults by 60%.[33] In China, Sinopec launched four pilot plants (Jiujiang, Zhenhai, Maoming, and Yanshan) using advanced control and online optimization. This increased profits by about 10% (i.e. at Yanshan and at Maoming profits increased by 25.12 million CNY and 41.94 million CNY respectively).30


Process Control is very common in refineries and chemical plants. It was first used in the 1920s-1930s and today it is essential for product quality and for the safety and reliability of the processes. Despite technological progress, 85-95% of feedback control loops are based on PID controllers, and the main control systems date back to 1985. The value of technologies that have reached their end of life, and of those more than 20 years old, is about 65 billion US$ and 53 billion US$ respectively.33 Therefore, several tuning optimizations, such as artificial-intelligence methods (Genetic and Differential Evolution Algorithms), have been described here together with Advanced Process Control (APC). Furthermore, some examples of the advantages offered by the implementation of APC have been shown, and the main software packages for Model Predictive Control (MPC), such as DMCplus, SMOC and RMPCT, have been illustrated. In this scenario, "smart control" in chemical and petrochemical plants can play a pivotal role in reducing costs, increasing profits and creating safer plants. Current estimates indicate that plant profit margins can improve by about 10-20% while emissions can decrease by about 70%.

[5] S. R. Mohan, Five Best Practices for Refineries: Maximizing Profit Margins Through Process Engineering, Aspen Technology, 2016.
[11] D. E. Seborg, Process Dynamics and Control, Fourth Edition, Wiley 2016.
[14] A. Datta, Advances in Industrial Control: Structure and Synthesis of PID Controllers, Springer-Verlag London 2000.
[21] Lahiri, S. K. (2017), Introduction of Model Predictive Control, in Multivariable Predictive Control: Applications in Industry, John Wiley & Sons, Ltd, Chichester, UK. doi: 10.1002/9781119243434.ch1
[22] B. R. Mehta and Y. J. Reddy, Industrial Process Automation Systems: Design and Implementation, Butterworth-Heinemann 2014.
[23] J. Moyne et al., Run-to-Run Control in Semiconductor Manufacturing, CRC Press LLC, 2001.
[25] Lahiri, S. K. (2017), Historical Development of Different MPC Technology, in Multivariable Predictive Control: Applications in Industry, John Wiley & Sons, Ltd, Chichester, UK. doi: 10.1002/9781119243434.ch3
[26] S. Joe Qin, A survey of industrial model predictive control technology, Control Engineering Practice, 11, pp 733-764, 2003.
[29] Lahiri, S. K. (2017), Commercial MPC Vendors and Applications, in Multivariable Predictive Control: Applications in Industry, John Wiley & Sons, Ltd, Chichester, UK. doi: 10.1002/9781119243434.ch16

Practice, Technology and Measures for Improving Energy Efficiency in the Chemical and Petrochemical Sector

Author: Giovanni Franchi – Chemical Engineer – PhD Student – University UCBM – Rome (Italy)

1. Theme Description

Energy use grew from 4.6 Mtoe[1] in 1973 to 13.4 Mtoe in 2012. Total final energy consumption decreased in Europe while it increased in non-OECD countries, growing by a further 1.3% in 2014 (i.e. 3.1% in China and 4.3% in India).[2],[3]

Figure 1 shows world energy consumption for OECD and non-OECD countries from 1990 to 2040. As can be seen, from 2010 up to 2040 it will grow by 56%, from 524 quadrillion BTU to 820 quadrillion BTU. The industrial sector will consume more than 50% of the energy in 2040, and 80% of this energy will be produced from fossil fuels.

Figure 1 - World Energy Consumption from 1990 up to 2040.[4]

In this scenario the chemical and petrochemical sectors contribute a large part of industrial energy consumption (~30% including feedstocks)[5]. Therefore, in the following sections, Best Practice Technologies (BPT) that save energy and reduce CO2 emissions are described.


2. Energy Consumption in the Chemical and Petrochemical Sectors

The energy consumption of industry reached 29% of final energy consumption in 2012, and the chemical and petrochemical sectors are the largest energy users, with 35 EJ[6] (see Figure 2), contributing about 7% of global CO2 emissions.[7]
Figure 2 - Energy consumption by sector (figure 1) and Industrial Energy Consumption by sector (figure 2) (2)
The main energy-consuming processes are steam cracking, ammonia production from natural gas and coal, extraction of aromatics, and methanol and butylene production, which account for about 70% of the consumption.5 Energy efficiency efforts in these sectors started in the 1970s, after the oil crisis. Table 1 and Table 2 show some of the possible measures to increase energy efficiency. In particular, Table 1 refers to the main equipment used in the processes, while Table 2 refers to the production of specific chemical compounds.
 Equipment, Steam Distribution and Controls – Measures to increase Energy Efficiency
  • Pretreatment of boiler feed water.
  • Flue gas analyzer (it improves efficiency and reduces NOx).
  • Reduce flue gas amount due to leaks in the boiler.
  • Reduce excess air.
  • Improve insulation.
  • Maintenance (i.e. antifouling and antiscaling).
  • Recover heat (i.e. flue gas and blowdown).
  • Fouling prevention by means of temperature control, regular maintenance and cleaning, inhibitors and surface coating.
 Steam Distribution
  • Insulation (low thermal conductivity, resistance to water adsorption, combustion and temperature change).
  • Steam trap (i.e. maintenance, recovery flash steam).
  • Recovery of hot condensate.
 Electric Motors (pumps, compressors and fans)
  • Follow standard of NEMA (USA) or IEC (EU).
  • Use variable speed drivers.
  • Pump/motor alignment check.
  • Correct size.
  • Use multiple pumps.
  • Replace V-belts with cog belts.
  • Keep motors and compressors lubricated and clean.
  • Use filter to prevent entry of contaminants.
 Distillation Columns
  • Optimize the reflux ratio.
  • Reduce purity when it is not necessary; in this way the reboiler duty decreases.
  • Replace trays with new ones.
  • Replace old column with Divided Wall and Heat Integrated columns.
 Control system
  • Mathematical (“rule-based”).
  • Neural Network (“fuzzy-logic”).
  • Artificial Intelligence.
Table 1 - Methods to Improve Energy Efficiency by referring to specific equipment (for more details see[8],[9]).
 Chemical Compounds Production – Measures to increase Energy Efficiency
  • Sulphur-based inhibitor (reduce coke formation in the coil).
  • Improve furnace coils (i.e. ceramic or ceramic coated).
  • Integration with a gas turbine.
  • Use of high-temperature quench oil towers.
  • Reduce pressure drop in compressor inter-stage.
  • Improve energy recovery.
  • Use power and steam from cogeneration.
  • Production of low-pressure steam (i.e. using the exothermic heat of the reaction).
  • Gear pump and/or extruder.
  • Re-use of solvent, oils and catalysts.
  • Use of steam condensate instead of low pressure steam.
Table 2 - Methods to Improve Energy Efficiency by referring to specific compounds (for more detail see 8).

2.1 Applications of Emerging Technologies

The main chemical and petrochemical processes (i.e. steam cracking, ammonia production, etc.) use catalysts to enhance the rate of specific reactions, increasing the yield. The IEA, in collaboration with the International Council of Chemical Associations (ICCA) and DECHEMA, estimated that improvements in catalysts and related processes could reduce energy consumption by 20-40% by 2050.[10]

Recently new processes have been developed to produce these compounds at lower costs:
  • Methanol to Olefins (MTO) uses synthesis gas instead of crude oil. UOP and Norsk Hydro (now Ineos) developed an MTO process that increases the yield of ethylene and propylene, reducing by-products and catalyst consumption.[11] This process has been tested at semi-commercial scale by Total Petrochemicals in Belgium.
  • Hydrogen Peroxide Propylene Oxide (HPPO) produces propylene oxide by the reaction of hydrogen peroxide with propylene. The process saves about 10-12% of energy (including hydrogen peroxide production) compared to conventional processes10, avoiding by-products such as propylene dichloride and styrene monomer. One of the biggest commercial plants (300,000 t/year) is in Belgium, based on BASF/Dow Chemical technology.[12]
  • Gas to Liquids (GTL), where natural gas is converted into liquid fuels such as naphtha, kerosene, diesel, etc.[13] Nowadays there are five commercial plants, developed by Shell (Malaysia and Qatar), Sasol (South Africa) and a joint venture between Sasol and Chevron (Qatar). These plants have capacities between 2,700 bbl/d and 140,000 bbl/d and high investment costs[14] (i.e. Shell cancelled a plant in Louisiana after its price jumped from 12.5 to over 20 B$[15]). Therefore, small GTL plants have recently been tested: a commercial plant realized in Brazil by Petrobras and CGTL produces 200,000 scf/d and cost 45 million US$.[16]

2.2 Indices to evaluate Best Practice Technologies (BPT)

Nowadays two terms are used to group the most efficient technologies used in the processes:
  • BPT stands for Best Practice Technologies and refers to the most advanced technologies economically available at industrial scale.
  • BAT stands for Best Available Technologies: more technologically advanced, but not always economically suitable.
In some cases the two terms coincide. The chemical and petrochemical sectors usually refer to BPT.5,[17]

The International Energy Agency (IEA), in the report "Chemical and Petrochemical Sector: Potential of Best Practice Technology and other measures for improving energy efficiency", has defined two different indices, one for energy efficiency and one for CO2 savings.

The former is the ratio between the sum of the minimum energy associated with each process and the total energy used by chemical and petrochemical processes (Table 3). The latter takes into account only direct emissions, excluding those related to electricity use and waste treatment (Table 4).

The value of both indices is a function of the approach used. In both the top-down and bottom-up approaches, the energy efficiency index is the ratio between the potential performance of the sector under BPT and its current performance. However, in the top-down approach the BPT values are scaled by a coverage factor set equal to 0.95 for all countries, while in the bottom-up approach this value is specific to each country. The coverage factor takes into account that not all processes are considered. Table 3 shows the results for 57 processes and 66 chemical products. Considering electricity, the improvement potential reaches 20%.5

Country | TFEU[18] [PJ/y] | (BPT)T-D[19] [PJ/y] | (BPT)B-U[20] [PJ/y] | (EEI)T-D[21] [%] | (EEI)B-U[22] [%] | IT-D[23] [%] | IB-U[24] [%]
USA | 6412 | 4851 | 5713 | 75.6 | 89.1 | 24.4 | 10.9
China | 4301 | 4459 | 3397 | 103.7 | 79.0 | -3.7 | 21.0
Germany | 1064 | 1048 | 931 | 98.5 | 87.5 | 1.5 | 12.5
India | 1096 | 1113 | 893 | 101.5 | 81.4 | -1.5 | 18.6
France | 627 | 556 | 563 | 88.7 | 89.9 | 11.3 | 10.1
Italy | 389 | 348 | 344 | 89.5 | 88.5 | 10.5 | 11.5
World | 31,529 | 26,544 | 26,898 | 84.2 | 85.3 | 15.8 | 14.7
Table 3 - Improvement potentials of main Countries in 2006 (excluding electricity) (6)
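The indicator definitions given in the footnotes (EEI = BPT/TFEU, improvement potential I = 1 − EEI) can be checked directly against the rows of Table 3; a minimal sketch using a few of them:

```python
# Table 3 data (PJ/y): country -> (TFEU, (BPT)T-D, (BPT)B-U)
TABLE3 = {
    "USA":   (6412, 4851, 5713),
    "China": (4301, 4459, 3397),
    "World": (31529, 26544, 26898),
}

def indicators(tfeu, bpt):
    """EEI = BPT / TFEU (in %); improvement potential I = 100 - EEI."""
    eei = 100.0 * bpt / tfeu
    return eei, 100.0 - eei

# Reproduce both the top-down and bottom-up columns for each country.
summary = {c: {"EEI_TD": indicators(tfeu, td)[0], "I_TD": indicators(tfeu, td)[1],
               "EEI_BU": indicators(tfeu, bu)[0], "I_BU": indicators(tfeu, bu)[1]}
           for c, (tfeu, td, bu) in TABLE3.items()}
```

Note how a BPT value larger than the TFEU (as for China top-down) yields an EEI above 100% and hence the negative improvement potential discussed below.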

The top-down approach underestimates the improvement potential for China and India, leading to negative values, while the bottom-up approach leads, for some countries, to a coverage factor of more than 100%. Therefore, both methods have critical elements due to overestimation of the processes; indeed, heat cascading and co-generation are neglected.


Country | Direct CO2 Emissions [Mt CO2/y] | (CO2)index-mix[25] T-D | (CO2)index-mix B-U | (CO2)index-NG[26] T-D | (CO2)index-NG B-U
USA | 278 | 0.63 | 0.81 | 0.51 | 0.67
China | 148 | 1.03 | 0.50 | 0.47 | 0.07
Japan | 111 | 0.80 | 0.87 | 0.53 | 0.59
Germany | 42 | 0.95 | 0.74 | 0.63 | 0.46
France | 27 | 0.79 | 0.80 | 0.52 | 0.53
Italy | 12 | 0.73 | 0.70 | 0.43 | 0.40
World | 1,255 | 0.65 | 0.66 | 0.50 | 0.51
Table 4 - CO2 savings for main countries in 2006 (5)
The CO2 savings are equal to:
  • 20-37% with the actual fuel mix and 37-57% with natural gas, for the top-down approach;
  • 19-50% with the actual fuel mix and 33-60% (excluding China) with natural gas, for the bottom-up approach.

Finally, Figure 3 shows the energy saving potential with BPT and other options such as co-generation, recycling, energy recovery, etc. For the chemical and petrochemical sectors, the energy saving potential with BPT amounts to 120-150 Mtoe/year and 370-470 Mt CO2/year.7


Figure 3 - Comparison between energy saving potential.[27]


The chemical and petrochemical sectors are the largest energy users within industry, reaching 30% of final consumption in 2012. There are several measures to improve energy efficiency (Table 1 and Table 2), and some of the emerging processes are Methanol to Olefins (MTO), Hydrogen Peroxide Propylene Oxide (HPPO) and Gas to Liquids (GTL). The International Energy Agency (IEA) has defined two indices to evaluate the energy efficiency and the potential CO2 savings obtainable by applying Best Practice Technologies (BPT), a term that groups the most advanced technologies economically available at industrial scale. The value of these indices depends on the approach used: top-down or bottom-up. The two methods lead to different results, and both in some cases overestimate or underestimate the improvement potential. Therefore, it is necessary to consider more data and to combine BPT with co-generation, recycling and the use of biomass feedstocks. The IEA, in collaboration with the International Council of Chemical Associations (ICCA) and DECHEMA, has also defined four pathways to be followed in the future: improving feedstock energy (i.e. production of synthesis gas from several raw materials), fuels from gas and coal, new routes to polymers (i.e. saccharification of lignocellulose into bioethanol) and hydrogen production (i.e. from biomass and waste materials, improvement of water electrolysis, etc.).[28]

[1] Mtoe = million tonnes of oil equivalent.
[2] S. Fawkes et al., Best Practice and Case Studies for Industrial Energy Efficiency Improvement, An Introduction for Policy Makers, Copenhagen Centre of Energy Efficiency, 2016.
[5] D. Saygin et al., Chemical and Petrochemical Sector: Potential of best practice technology and other measures for improving energy efficiency, OECD/IEA, 2009.
[6] D. Saygin et al., Potential of best practice technology to improve energy efficiency in the global chemical and petrochemical sector, Energy 2011, 36, pp 5779-5790.
[7] M. Hagemann et al., Development of sectoral indicators for determining potential decarbonization opportunity, Ecofys and Institute of Energy Economics, Japan 2015.
[8] Maarten Neelis et al., Energy Efficiency Improvement and Cost Saving Opportunities for the Petrochemical Industry, An ENERGY STAR® Guide for Energy and Plant Managers, Energy Analysis Department, Environmental Energy Technologies Division, Ernest Orlando Lawrence Berkeley National Laboratory, University of California, 2008.
[9] Yeen Chan et al., Study on Energy Efficiency and Energy Saving Potential in Industry and on Possible Policy Mechanisms, ICF International, 2015.
[18] TFEU = actual total final fuel and steam use of a country reported in IEA energy statistics, including feedstocks.
[19] (BPT)T-D = specific final energy consumption under Best Practice Technology for a top-down approach. This value is scaled according to a coverage factor (to take into account that some processes have not been considered) assumed equal to 0.95.
[20] (BPT)B-U = specific final energy consumption under Best Practice Technology for a bottom-up approach. This value is scaled according to a coverage factor set equal to: 0.82 for the USA, 1.26 for China, 1.20 for India, 1.08 for Germany, 0.95 for France and 0.97 for Italy.
[21] (EEI)T-D = Energy Efficiency Indicator for a top-down approach.
[22] (EEI)B-U = Energy Efficiency Indicator for a bottom-up approach.
[23] (I)T-D = Improvement Potential (1-(EEI)T-D) for a top-down approach.
[24] (I)B-U = Improvement Potential (1-(EEI)B-U) for a bottom-up approach.
[25] (CO2)index-mix evaluates the CO2 saving under BPT with the same fuel mix as in 2006. Referring, for example, to the EU, in 2014 the fuel mix consisted of: Electricity (56%), Gas (32%), Solid Fuel (5%), Total Petroleum Products (4%) and Other (3%). (ref. 9)
[26] (CO2)index-NG evaluates the CO2 saving under BPT by means of natural gas.

Research Highlights in New Catalytic Technologies

Author: Mauro Capocelli – Researcher – University UCBM – Rome (Italy)

1. Theme Description

Catalysts are compounds used to increase the rate of a specific reaction by reducing its activation energy.[1] This lowers the temperature/pressure of the processes, saving fuel. Catalysts can be homogeneous or heterogeneous depending on the phases involved in the reaction (i.e. heterogeneous catalysts are usually solid while the reagents are liquid or gaseous).[2] These substances are not consumed by the reactions, but over time catalytic activity and selectivity decrease due to phenomena such as poisoning, fouling, coking, carbon deposition and sintering.[3] Therefore, regeneration is necessary. In 2014 the global market for catalysts and catalyst regeneration reached 24.6 billion US$[4] and it is estimated to reach 34.3 billion US$ in 2024.[5]
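The effect of a lower activation energy can be quantified with the Arrhenius law, k = A·exp(−Ea/RT). A small illustration with assumed, purely hypothetical values (pre-exponential factor, activation energies and temperature are not from the source):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def rate_constant(A, Ea, T):
    """Arrhenius law: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical numbers: a catalyst lowering the activation energy
# from 100 kJ/mol to 80 kJ/mol at 500 K.
k_uncat = rate_constant(1e10, 100e3, 500.0)
k_cat = rate_constant(1e10, 80e3, 500.0)
speedup = k_cat / k_uncat        # = exp(20e3 / (R*500)), i.e. over a hundredfold
```

The same speedup could instead be traded for a lower operating temperature at constant rate, which is the fuel-saving effect mentioned above.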

Figure 1 – Common catalysts used in refining and petrochemical processes[6]

Nowadays the main companies are Haldor Topsoe, UOP, Johnson Matthey, Süd-Chemie, BASF, Exxon Mobil Chemical and so on.[7] In the following sections, catalyst technologies in the chemical and refining sectors are described. In addition, new trends in the development of catalysts are illustrated.

2. Catalysts in Industrial Processes

Catalysts are widespread in industrial processes, from the chemical to the refining sector, for the production of several compounds.

2.1 Chemical Sectors

In the chemical sector, catalysts are used to obtain:
  • Xylenes, a mixture of aromatic hydrocarbons obtained from petroleum naphtha and, to a lesser extent, from pyrolysis gasoline (a by-product of ethylene plants) and coal liquids from coking.[8] The mixture is rich in m-xylene (50-60%), but the most important isomer is para-xylene (20-25%)[9], used for the production of polyester fibres[10], resins and films. p-Xylene is usually recovered by crystallization or by adsorption on molecular sieves,[11] but it is difficult to separate from the other isomers (very close boiling points). Therefore, toluene disproportionation and methylation are also used. Both technologies exploit zeolite catalysts (i.e. ZSM-5) and give high selectivity to p-xylene. Toluene disproportionation produces, from two molecules of toluene, one of xylene and one of benzene, while toluene methylation produces xylenes and water from the reaction of toluene with methanol.[12]
  • Ethylbenzene is used for the production of styrene, a chemical compound employed to synthesize thermoplastic polymers and elastomers (8). Usually, ethylbenzene is obtained by alkylation, where ethylene reacts with benzene over acid catalysts. Transalkylation is also used to improve the process yield by converting polyalkylbenzenes (PBE), a by-product, into ethylbenzene.[13] Figure 2 illustrates the process developed by Polimeri Europa (now Versalis). The system uses proprietary zeolite Beta-based catalysts: PBE-1 for the alkylation section and PBE-2 for the transalkylation section.[14]


Figure 2 - Production of Ethylbenzene from Polimeri Europa (Versalis since 2012)(8)
  • Cumene is used for phenol and acetone production. It is obtained from the catalytic alkylation of benzene with propylene. The most common catalysts are based on Solid Phosphoric Acid (SPA), housed in fixed-bed reactors operating at 180-240°C and 3-4 MPa. However, the release of free acids causes corrosion problems.8 Hence, new zeolite-based catalysts have recently been launched. UOP, for example, has developed the Q-MaxTM Process[15] shown in Figure 3, where zeolite catalysts are used in both the alkylation and transalkylation reactors. These catalysts are regenerated after three cycles.
Figure 3 - Q-MaxTM Process developed by UOP15

In the process, a depropanizer and a diisopropylbenzene (DIPB) column are used. The former removes propane from the alkylation reactor effluent, while the latter separates DIPB from heavy aromatics. A transalkylation reactor, in which DIPB reacts with benzene, is also used to improve the cumene yield.

2.2 Refining

Catalyst technologies are used in refining processes such as:

  • Hydrotreating is used to remove sulphur, nitrogen, oxygen, olefins and metals from distillate fuels such as naphtha, diesel and kerosene by means of hydrogen at high pressure and temperature over catalysts. These cylindrical catalysts are metal oxide-based (NiMo, CoMo or MoO3, WO3) on alumina supports.[16] Table 1 summarizes their main physical properties.
Table 1 - Physical Properties of NiO/CoO and MoO3/WO3 catalysts (values from [17])
  • Catalytic Reforming is used to transform heavy naphtha into gasoline with a high octane rating in fixed-bed reactors. Catalysts are platinum-based, alone or combined with other metals, on alumina supports (Pt/Al2O3, Pt-Re/Al2O3, Pt-Ti/Al2O3).[18] Before catalytic reforming, the feed is hydrotreated to remove sulphur and nitrogen, which poison the active catalyst sites.
  • Isomerization of light naphtha improves the octane rating of C5-C6 hydrocarbons by 10-20 points. Chlorided alumina, zeolite and sulfated oxide are the most common catalysts. The first has high activity and high isomerate yields, but is sensitive to poisoning; hence continuous chloride addition is necessary. Zeolite and sulfated oxide catalysts can be regenerated but have lower activity and require high H2/hydrocarbon ratios.[19]
  • Synthetic Fuels are obtained from syngas, a mixture of CO and H2. In Fischer-Tropsch synthesis, syngas is converted into hydrocarbon blends that are further refined to produce gasoline. The process uses transition-metal catalysts such as iron or cobalt. In the presence of iron catalysts, the water produced by the reactions is converted into CO2 and H2 via the water-gas shift reaction. The operating temperature and pressure are 200-350°C and 20-50 bar, respectively. Syngas is also used to produce methanol over catalysts such as Cu/ZnO/Al2O3 at 225-275°C and 50-100 bar[20]. ExxonMobil has developed a process to convert methanol to gasoline (MTG). As shown in Figure 4, methanol is vaporized and fed to a DME reactor. The effluent, a methanol/DME mixture, is sent to the MTG reactors, where it is completely dehydrated over proprietary catalysts, producing gasoline. The gasoline enters the de-ethanizer and stabilizer columns, where fuel gas and LPG fractions are removed. The stabilized gasoline is then split into light and heavy gasoline; the heavy stream is treated to reduce its durene content.
Figure 4- Methanol to Gasoline ExxonMobil Process[21]
  • Catalytic Dewaxing is a process that improves the cold flow properties of middle distillate feedstocks. Commercially, there are two configurations depending on the catalysts used. If the catalysts are based on Ni, Co/Mo or Ni/Mo, a single stage is adopted: stacked beds of hydrotreating (to remove sulphur and nitrogen) and dewaxing catalysts are placed at the top/bottom of the reactor according to the feedstock. In the presence of noble-metal catalysts, a double stage is used because severe hydrotreating is required: the first stage consists of stacked hydrotreating beds, while the second is the dewaxing stage[22]. Shell has developed proprietary catalyst formulations (SDD) for the dewaxing stage that remove "wax" by converting it into isomerized and cracked molecules. The single-stage process uses SDD-800, which reduces the loss of distillate and increases catalyst activity before regeneration; the catalyst can operate under high concentrations of H2S and NH3. The double-stage process adopts SDD-821, a noble-metal catalyst that increases yield but requires low concentrations of H2S and NH3.[23]

3. R&D in Catalytic Technologies

Catalysts reduce the temperature/pressure of a reaction, decreasing the amount of fuel, feedstock and expensive materials involved in the processes. Therefore, it is crucial to develop new catalysts and to optimize existing ones. When a new catalyst is synthesized, the first step is to select the chemical elements by means of mathematical algorithms and discard those that are not suitable. For example, choosing among 50 chemical elements, the possible combinations range from 1,225 for binary up to 230,300 for quaternary combinations[24]. Before commercialization, the synthesized catalyst is tested at laboratory scale and then in a pilot plant under different operating conditions. The reactors (fixed bed, fluidized bed, etc.) used in the experimental tests affect the shape and texture of the catalysts (pellets, spherical or granular particles, etc.). In the following section, for example, the most recent catalysts developed by BASF and Clariant are described:
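The combination counts quoted above follow directly from the binomial coefficient; a quick check for a hypothetical pool of 50 candidate elements:

```python
from math import comb

n_elements = 50  # size of a hypothetical candidate-element pool

# Number of distinct binary and quaternary element combinations
binary = comb(n_elements, 2)      # pairs of elements
quaternary = comb(n_elements, 4)  # four-element combinations

print(binary)      # 1225
print(quaternary)  # 230300
```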

  • FortressTM NXT is a catalyst developed by BASF for Fluid Catalytic Cracking (FCC). It increases metals passivation, reducing coke and hydrogen production.[25]
  • PolyMax 850, launched by Clariant, is a new phosphoric acid catalyst that converts olefins into gasoline and solvents. The catalyst reduces CO2 emissions by about 100,000 tons compared with conventional ones, and the spent catalyst can be recycled as a fertilizer or phosphoric feedstock.[26]
Figure 5 - (A) FortressTM NXT (25), (B) PolyMax 850 (26)


Since the first large-scale plants for the production of sulfuric acid in 1875, catalysts have spread rapidly through industrial processes.[27] In the chemical sector they are used for the production of several compounds such as xylenes, ethylbenzene and cumene. In refining, they are used in hydrotreating, catalytic reforming, isomerization, synthetic fuels, catalytic dewaxing, etc. Nowadays about 80-90%24 of chemical processes adopt catalysts (mainly heterogeneous), and the global market for catalyst production/regeneration reaches billions of US$. It is therefore necessary to develop new catalysts and to optimize the selectivity and activity of existing ones by reducing deactivation.

[9] C. Perego and P. Pollesel, Chapter 2 - Advances in Aromatics Processing Using Zeolite Catalysts, Advances in Nanoporous Materials 2010, 1, pp 97-149.
[12] M. T. Ashraf, Process of p-Xylene Production by Highly Selective Methylation of Toluene, Industrial & Engineering Chemistry Research 2013, 52 (38), pp 13730-13737.
[13] I. M. Gerzeliev et al., Ethylbenzene Synthesis and Benzene Transalkylation with Diethylbenzenes on Zeolite Catalysts, Petroleum Chemistry 2011, 51(1), pp 39-48.
[16] S. Parkash, Refining Processes Handbook, Gulf Professional Publishing, 2003.
[17] J. Ancheyta and J. G. Speight, Hydroprocessing of Heavy Oils and Residua, CRC Press Taylor & Francis Group, 2007.
[19] G. Valavarasu and B. Sairam, Light Naphtha Isomerization Process: A Review, Petroleum Science and Technology 2013, 31(6), pp 580-595.
[22] C. Perego et al., Chapter 19 - Naphtha Reforming and Upgrading of Diesel Fractions, in Zeolites and Catalysis: Synthesis, Reactions and Applications, edited by Jiri Cejka et al., WILEY-VCH 2010, pp 585-622.
[24] M. Baerns and M. Holeňa, Combinatorial Development of Solid Catalytic Materials: Design of High-Throughput Experiments, Data Analysis, Data Mining, Imperial College Press 2009.
[27] C. H. Bartholomew and R. J. Farrauto, Fundamentals of Industrial Catalytic Processes, Second Edition, Wiley-Interscience, 2006.

Latest Advances in Computational Chemistry for Petroleum and Petrochemical Processing

Author: Marcello De Falco – Associate Professor –University UCBM – Rome (Italy)

1. Theme Description

The use of software for the solution of complex problems dates back to the 1960s. Since then, computational chemistry has grown quickly thanks to increasingly powerful computers:[1]

  • in 1966 Simulation Sciences launched PROCESS, a program for simulating distillation columns;
  • in 1969 DESIGN, a flow-sheeting program for oil and gas processes, was commercialized;
  • in the 1970s FORTRAN became the programming language of engineers;
  • in 1976 the ASPEN simulation program was started at MIT under US Department of Energy sponsorship;
  • in 1981 IBM launched its first personal computer, the IBM 5150.

Since the 1990s PC programs have played a key role and are now widespread in petroleum and petrochemical processing. In the following sections, the basics of computational chemistry and the principles of the main commercial software packages are described.


2. Computational Chemistry

Computational Chemistry is the branch of chemistry that uses computer-implemented mathematical models for:

  • determining the physical properties of streams;
  • improving the efficiency of the processes by means of sensitivity analysis;
  • describing new compounds and materials.

The methods on which the models are based can be divided into Classical Computational Methods and Computational Quantum Chemistry.


2.1 Classical Computational Methods

These methods are based on the laws of classical mechanics and include:

  • Molecular Mechanics (MM)[2] describes molecules as collections of balls held together by springs. The balls represent the atoms, while the springs represent the chemical bonds. The model minimizes the molecular potential energy to find bond lengths, angles and dihedrals. It is often called the force field method and can describe molecules with thousands of atoms.
Figure 1 - Schematic Representation of a Molecule (2)
  • Molecular Dynamics (MD)[3] describes the vibrational/Brownian motion of molecules. The momenta and forces of the atoms are obtained from their chosen initial positions and velocities; new positions and velocities are then computed from the information obtained in the previous step. The trajectory, energy levels and conformations of the substances are computed by iterating the algorithm. This method is suitable for protein applications.
  • Monte Carlo Simulation (MC)[3]: unlike molecular dynamics, this method is not deterministic but based on statistical distributions. After choosing the initial positions of the atoms and computing the energy of the system, a trial move is selected randomly. The new configuration is accepted if the system reproduces a Boltzmann distribution; otherwise another trial move is made, or the previous position is kept, until the system is equilibrated.
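The Monte Carlo acceptance step described above can be sketched with the Metropolis criterion. This is a minimal one-dimensional toy system (harmonic energy, unit temperature); all parameters are illustrative, not from the source:

```python
import math
import random

def metropolis_step(x, energy, beta=1.0, step=0.5):
    """Propose a random trial move and accept it with the Boltzmann criterion."""
    x_new = x + random.uniform(-step, step)   # random trial move
    dE = energy(x_new) - energy(x)
    # Downhill moves are always accepted; uphill moves with probability exp(-beta*dE),
    # which makes the sampled configurations follow a Boltzmann distribution.
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return x_new
    return x  # reject: keep the previous position

# Toy potential: harmonic well E(x) = x^2, started far from the minimum
random.seed(0)
x = 5.0
for _ in range(10000):
    x = metropolis_step(x, lambda v: v * v)
print(round(x, 2))  # after equilibration x fluctuates near the minimum at 0
```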

2.2 Computational Quantum Chemistry

These methods are based on the laws of quantum mechanics and include:
  • Ab Initio[4] methods solve the Schrödinger equation, giving the positions of the atoms and the electronic energy and density. The basic method is Hartree-Fock, which does not take electron correlation into account and can therefore be used only in a few cases. Several methods have been introduced to overcome this restriction (Møller-Plesset perturbation theory, Coupled Cluster, multireference perturbation methods, etc.).
  • Semi-empirical Quantum Mechanics[5] treats only the valence electrons, ignoring some integrals. The errors due to this approximation are reduced by empirical parameters.
  • Density Functional Theory (DFT) is based on the Hohenberg-Kohn theorem. It represents the total energy of the system as a functional of the electron density. In this way the problem can be solved from three spatial coordinates instead of the 3N coordinates of the electrons.4

A combination of Quantum Mechanics and Molecular Mechanics (QM/MM) is used to describe reactions in a condensed phase. A small part of the system is treated with quantum mechanics, which accounts for the rearrangement of the electrons due to chemical reactions. The rest is treated with molecular mechanics, which describes the molecular geometry.[6]


2.3 Computational Chemistry in Industrial Processes

Process simulation started in 1966, when Simulation Sciences launched the program PROCESS (today PRO/II) for the simulation of distillation columns. It is now widespread thanks to the possibility of simulating both steady-state and dynamic behaviour. Steady-state simulation is used for equipment design and plant debottlenecking, while dynamic simulations reproduce start-up, shut-down, disturbances, operability, etc.[7]

The main software packages used in industrial processes are based on two techniques[8]:

  • Sequential Modular Approach (SM) divides the flowsheet into a series of blocks that are solved in sequence. In the presence of recycle streams, a "tear stream" approach is used.8


Figure 2 - Sequential Modular Approach [9]

The tear stream approach assigns an initial guess to the torn stream, so that the blocks can be solved sequentially. The guess is then updated by an algorithm until convergence is reached. The method is suitable for steady-state simulation, but it is time-consuming for very complex systems.
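The tear-stream iteration can be sketched as successive substitution: guess the recycle stream, solve the blocks in sequence, and repeat until the recomputed tear stream matches the guess. The single-recycle "flowsheet" below is entirely made up for illustration:

```python
def solve_flowsheet(tear_guess, tol=1e-8, max_iter=100):
    """Solve a toy flowsheet with one recycle by successive substitution.

    The 'flowsheet' is a made-up linear model: the recycle flow leaving
    the separator is 40% of (fresh feed + recycle entering the mixer).
    """
    fresh_feed = 100.0  # kmol/h, hypothetical
    x = tear_guess
    for iteration in range(1, max_iter + 1):
        # Mixer -> reactor -> separator, solved in sequence
        mixed = fresh_feed + x
        recycle_out = 0.4 * mixed       # separator sends 40% back
        if abs(recycle_out - x) < tol:  # converged: result matches the guess
            return recycle_out, iteration
        x = recycle_out                 # update the tear stream and repeat
    raise RuntimeError("tear stream did not converge")

recycle, n = solve_flowsheet(tear_guess=0.0)
print(round(recycle, 4), n)  # converges to the fixed point 100*0.4/(1-0.4) = 66.6667
```

In a real simulator the update uses acceleration schemes (e.g. Wegstein) rather than plain substitution, but the convergence loop has this shape.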

  • Equation-Oriented Approach (EO): all the equations used in the software are solved simultaneously. It is suitable for an object-oriented modelling approach and can simulate both steady state (nonlinear algebraic equations) and dynamics (differential equations).


Figure 3 - Equation Oriented Approach (9)

The combination of the two approaches (SM & EO) is called the Simultaneous Modular Approach.

2.4 Main Commercial Software

In this section, the main commercial software packages are listed:

  • Aspen Plus is one of the packages developed by AspenTech[10]. It is widespread in petrochemical and pharmaceutical processes and has a database of about 5,900 components from NIST. It can be integrated with cost-analysis and heat-exchanger design software, and can be interfaced with Microsoft Excel by means of Visual Basic. It allows steady-state/dynamic simulation, taking non-ideal and solid systems into account.[11]
  • DESIGN II for Windows, produced by WinSim Inc.[12], is suitable for petrochemical processes: it includes more than 60 thermodynamic methods, 1,200 components and 38 world crude oils. Other compounds can be added with ChemTran, which also calculates non-ideal properties of mixtures. It links automatically with Microsoft Excel, Visual Basic and Visual C++ interfaces, and allows FORTRAN commands to define specific options.


Figure 4 - Refinery flowsheet with Design II for Windows[13]
  • SimSci PRO/II is owned by Invensys SimSci[14]. It simulates steady-state processes in refining, polymerization and pharmaceutical applications, performing rigorous mass and energy balances. It now also includes Spiral Crude Suite, a package that characterizes crude feedstocks in detail, yielding more rigorous models.1
  • ChemCAD, commercialized by Chemstations Inc.[15], includes several packages for designing new processes or improving existing ones; extensive thermodynamic data and unit-operation costs are available. Furthermore, it can simulate steady state and dynamics, such as plant operability, control loops, operator training, etc.
  • gPROMS, developed by Process Systems Enterprise (PSE)[16], is based on the equation-oriented approach. Differential equations and physical and chemical properties are written in the gPROMS ModelBuilder; the resulting model is matched with experimental data to adjust its parameters. It can interface with Excel, Matlab and FLUENT environments, and is suitable for describing gas separation processes, crystallization, polymerization, fixed-bed reactors, etc.11
Figure 5 - gPROMS ProcessBuilder (16)


Since the 1960s, computational chemistry (classical and quantum) has played a pivotal role in solving complex problems. Commercial programs are now based on two mathematical approaches: the Sequential Modular Approach (SM) and the Equation-Oriented Approach (EO). SM is suitable for steady-state solutions, while EO suits dynamic processes and real-time optimization. Several software packages (Aspen Plus, PRO/II, gPROMS, etc.) can reproduce the main petroleum and petrochemical processes; but despite ever more powerful computers, some simulations remain time-consuming. The future challenge is therefore to reduce this time further and to integrate different modelling components and environments through a standard interface (i.e. the CAPE-OPEN project[17]).

[1] A. Dimian et al., Integrated Design and Simulation of Chemical Processes, Volume 13, 2nd Edition, Elsevier Science 2014.
[2] E. G. Lewars, Computational Chemistry: Introduction to the Theory and Applications of Molecular and Quantum Mechanics, Kluwer Academic Publishers, 2003.
[3] D. C. Young, Computational Chemistry: A Practical Guide for Applying Techniques to Real-World Problems, Wiley-Interscience, 2001.
[5] W. Thiel, Semiempirical quantum-chemical methods, WIREs Computational Molecular Science 2013, 4, pp 145-157.
[8] D. C. Y. Foo and Rafil Elyas, Introduction to Process Simulation, Chapter 1, in Chemical Engineering Process Simulation, 1st Edition, IChemE 2017.
[11] R. Gani, Process Systems Engineering, 2. Modelling and Simulation, ULLMANN'S Encyclopedia of Industrial Chemistry, 2012.

Improvements and New Technologies for Corrosion Control in Industrial Process Installations

Author: Giovanni Franchi-Chemical Engineer- Cooperation Contract -University UCBM - Rome (Italy)

1. Theme Description

Corrosion is the destructive attack of a metal by chemical or electrochemical reaction with its environment. It has been called "anti-metallurgy" because it tends to return metals to their natural state, combined with other elements (especially O2). Deterioration by physical causes is not called corrosion, but erosion, galling or wear[1],[2]. There are different types of corrosion (uniform, pitting, crevice, intergranular, galvanic, etc.), affecting different sectors: infrastructure, utilities, production, manufacturing and transportation. Corrosion costs arise from lost production and from health, safety and environmental issues. In the USA, direct costs alone grew from 276 billion US$ in 1998[3] to 1.1 trillion US$ in 2016.[4]

Table 1 reports the Global Corrosion Costs referring to 2013.

Table 1 – Global Corrosion Costs (2013)[5]

As can be seen, these costs reached 2.5 trillion US$, corresponding to 3.4% of the Global Gross Domestic Product. NACE International has estimated that applying corrosion-prevention techniques could save 375-875 billion US$ (15-35% of the total cost).[6]

The following sections describe the most common types of corrosion in industrial processes, such as oil and gas refining, and corrosion in water and in soil. Finally, methods to prevent and monitor corrosion are described.


2. Corrosion in Industrial Processes


2.1 Corrosion in Oil and Gas Refining

Corrosion is widespread in oil and gas refining; indeed, refining processes work at high pressures and temperatures, and the harmful fluids handled give rise to specific forms of corrosion (sulfidic corrosion, naphthenic acid corrosion, sour water corrosion, etc.).

The European Commission's report on "Corrosion Related Accidents in Petroleum Refineries" highlights that the most sensitive equipment in the 99 refineries analysed is the distillation unit (23% of failures), followed by hydrotreating equipment (20%); 17% of failures occurred in pipelines for transport between units, 4% in heat exchanger and cooling equipment tubes, 15% in storage tanks, whereas the rest involved other components such as trays, drums and towers.[7]

2.2 Corrosion in Water

Water is a very aggressive natural electrolyte for many metals and alloys owing to dissolved oxygen. Other factors affecting corrosion are pH, chlorides, Total Dissolved Solids (TDS), hardness and high temperature.

The Langelier Saturation Index (LSI) is one of the most common indices used to evaluate water corrosivity:

LSI = pH - pHS, where
  • pHS = pH at saturation conditions;
  • LSI < 0: the water is corrosive and could damage metal surfaces;
  • -5 < LSI < -3: treatments are recommended.[8]
  Local corrosion is accelerated by the presence of nitrates and nitrites.[9]
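The LSI classification above is easily scripted. In this sketch pHS is taken as an input; in practice it is computed from temperature, TDS, calcium hardness and alkalinity:

```python
def langelier_index(ph, ph_s):
    """LSI = pH - pHS (pHS = pH at calcium carbonate saturation)."""
    return ph - ph_s

def water_assessment(ph, ph_s):
    """Classify water using the thresholds quoted in the text."""
    lsi = langelier_index(ph, ph_s)
    if -5 < lsi < -3:
        return lsi, "corrosive - treatment recommended"
    if lsi < 0:
        return lsi, "corrosive - may damage metal surfaces"
    return lsi, "non-corrosive (scale-forming if positive)"

print(water_assessment(6.5, 8.0))  # LSI = -1.5: corrosive
print(water_assessment(7.8, 7.3))  # LSI > 0: scale-forming tendency
```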

2.3 Soil Corrosion

Soil corrosivity depends on electrical conductivity, oxygen concentration, and salt and acid content. Soil corrosion is common in storage tanks, cables and pipelines. Soil aeration is a good way to reduce corrosion, because well-aerated ground has higher evaporation rates and lower water retention.[10]


3. Methods for Corrosion Reduction, Measurement and Monitoring

As mentioned above, corrosion costs are very high. Therefore, it is necessary to prevent corrosion and to monitor its development during equipment operation.

3.1 Corrosion Prevention

Corrosion can be reduced by using:
  • suitable materials, e.g. titanium alloys, which show great resistance in heat exchanger and condenser tubes (Figure 1).[11]
Figure 1 – (a) tube bundle (Titanium Gr.12) of overhead vacuum condenser (b) carbon steel shell (c) detail of inlet nozzle corroded by acid gases (11)
  • cathodic protection, where the metal to be protected is made the cathode of an electrochemical cell. It is used to control corrosion in marine environments, but it cannot prevent MIC (Microbiologically Influenced Corrosion)[12]; it is also very common in soil corrosion prevention10;
  • protective coatings such as fiberglass-reinforced plastics (FRP), which combine the properties of a resin (i.e. polyester, epoxy or vinyl ester) with those of glass fibers: the former provides chemical resistance while the latter gives mechanical strength and resistance to external damage[13];
  • corrosion inhibitors, which are usually adsorbed on the metal surface, forming a protective film.[14]

3.2 Corrosion Monitoring

There are several techniques for corrosion measurement; they can be divided into Non-Destructive Techniques and Corrosion Monitoring Techniques[15].

Non-Destructive Techniques are used when it is not possible to remove damaged material, and include:

  • X-ray techniques use electromagnetic waves from 1 pm to 10 nm, with energies between 0.1 keV and 1 MeV. There are different methods, such as X-ray fluorescence analysis (XRF), X-ray diffraction analysis (XRD) and X-ray photoelectron spectroscopy (XPS). XRF and XPS are very similar: the X-ray energy causes some electrons to be ejected from the atom as photoelectrons, and the holes generated are filled by electrons from nearby shells, releasing energy. In XRF the released energy is measured, while in XPS the energy of the photoelectrons is measured. XRD uses waves of about 0.1 nm, corresponding to the lattice spacing, which are scattered by the electrons in the atoms at a certain angle; by measuring this angle it is possible to determine the chemical composition of the element.[16]
Figure 2 – (a) XRF, (b) XPS where K,L,M represent the energy levels (16)
  • Ultrasonic Technique is an online technique that can detect both general and localized corrosion. The system consists of a transducer (a piezoelectric material), the object to be analysed, and a couplant liquid placed between them. When the piezoelectric material oscillates, a wave is transmitted into the object; by measuring the time the wave takes to cross the material, the thickness can be determined. The technique can detect wall losses of about 0.1 mm.[17]
  • Eddy Current Technique is used on thin materials (aircraft skin, sheet stock, etc.). It exploits electromagnetic induction: alternating currents induce eddy currents in the material under test, and these in turn induce an alternating current in the sensor coil. The change between the two current fields allows the corrosion rate to be measured.16 New techniques have been studied, such as Photoinductive Imaging (PI) and Pulsed Eddy Current (PEC). The former uses an argon-ion laser to generate eddy currents, achieving microscopic resolution; the latter uses a low-frequency spectrum that provides information at different depths.[18]
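The thickness measurement behind the ultrasonic technique above is a simple time-of-flight calculation. A sketch, using an illustrative longitudinal sound speed for carbon steel:

```python
def wall_thickness_mm(round_trip_time_us, sound_speed_m_s=5920.0):
    """Wall thickness from ultrasonic time of flight.

    The pulse crosses the wall twice (in and back), so the one-way
    distance is v * t / 2. 5920 m/s is a typical longitudinal sound
    speed in carbon steel (illustrative value).
    """
    t_s = round_trip_time_us * 1e-6          # microseconds -> seconds
    return sound_speed_m_s * t_s / 2 * 1e3   # metres -> millimetres

# A 10 mm steel wall gives a round trip of about 3.38 microseconds
print(round(wall_thickness_mm(3.38), 2))
```

Comparing successive thickness readings over time gives the wall loss, and hence the corrosion rate.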
  Corrosion Monitoring Techniques include:
  • Electrical Resistance (ER) is an online method that measures the change in electrical resistance of a conducting element. According to the second Ohm's law, the resistance of an element is:
R = ρ l / S, where
  • ρ = resistivity;
  • l = element length;
  • S = cross section area of the element.
  • If S decreases due to corrosion, the element's resistance increases. By plotting metal loss over time, it is possible to work out the corrosion rate.[19] This method cannot be used in liquid metals or conductive molten salts.[20] There are different types of ER probes on the market: wire loop, cylindrical, tube loop, spiral loop, large/small flush and atmospheric.
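The ER principle can be sketched by inverting R = ρ l / S: a measured resistance increase translates into a loss of cross-section, and hence of metal. This assumes a cylindrical element with made-up resistivity and length:

```python
import math

def remaining_radius_mm(r_measured_ohm, rho_ohm_m, length_m):
    """Invert R = rho * l / S for a cylindrical element (S = pi * r^2)."""
    area_m2 = rho_ohm_m * length_m / r_measured_ohm
    return math.sqrt(area_m2 / math.pi) * 1e3  # metres -> millimetres

# Hypothetical steel wire element: rho = 1.7e-7 ohm*m, l = 0.1 m
rho, length = 1.7e-7, 0.1
r0 = remaining_radius_mm(1.0e-3, rho, length)  # initial resistance: 1.0 mOhm
r1 = remaining_radius_mm(1.1e-3, rho, length)  # resistance after corrosion
print(round(r0 - r1, 4), "mm of radius lost")  # loss over time -> corrosion rate
```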
  • Linear Polarization Resistance (LPR) uses the first Ohm’s law:
R = ΔV / I, where
  • ΔV = difference voltage applied to the electrodes;
  • I = current between the electrodes.

Two- or three-electrode probes are inserted into the process system. A potential of about 20 mV is applied between the elements and the current is measured. This method allows monitoring of general and galvanic corrosion, and qualitatively of local corrosion such as pitting and crevice corrosion[21]. It is suitable for evaluating the corrosion rate in real time.[22]
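In practice the measured polarization resistance Rp = ΔV/I is converted to a corrosion current with the Stern-Geary relation (i_corr = B/Rp). A sketch with illustrative values; B = 26 mV is a commonly used default, not a value from the source:

```python
def corrosion_current_density(delta_v, current, area_cm2, b_v=0.026):
    """Estimate corrosion current density from an LPR measurement.

    Rp = dV / I (first Ohm's law, as in the text); the Stern-Geary
    relation i_corr = B / Rp then gives the corrosion current.
    B = 26 mV is a commonly used default constant (illustrative).
    """
    rp_ohm = delta_v / current   # polarization resistance, ohm
    rp_area = rp_ohm * area_cm2  # area-normalized, ohm*cm^2
    return b_v / rp_area         # corrosion current density, A/cm^2

# ~20 mV applied, 2 microamps measured, 5 cm^2 probe element (made-up values)
i_corr = corrosion_current_density(0.020, 2e-6, 5.0)
print(f"{i_corr:.2e} A/cm^2")
```

The current density is then converted to a penetration rate via Faraday's law for the alloy in question.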

  • Corrosion Coupons are small bars of the same alloy as, or of similar chemical composition to, the equipment being monitored (e.g. mild steel, copper, stainless steel, nickel, etc.)[23]. They are introduced into the system through a side-stream coupon rack.[24] There are several kinds of corrosion coupons: strip, rod, flush disc and disc (Figure 3). Each coupon is certified by its serial number, weight in grams, dimensions, material and surface finish.[25] For corrosion in water, for example, coupons are removed from the rack after 30-90 days and returned to the laboratory, where the corrosion rate is determined from the weight loss (mils/year)[26]. In this way it is also possible to identify the type of corrosion that occurred.[27]
Figure 3 – Different configurations of corrosion coupons from CAPROCO27
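The coupon weight-loss calculation is standardized (e.g. ASTM G1). A sketch of the usual mils-per-year formula, with made-up exposure data:

```python
def corrosion_rate_mpy(weight_loss_g, density_g_cm3, area_cm2, hours):
    """Corrosion rate in mils per year from coupon weight loss.

    mpy = (K * W) / (D * A * T) with K = 3.45e6 for W in grams,
    D in g/cm^3, A in cm^2 and T in hours (ASTM G1-style constant).
    """
    K = 3.45e6
    return K * weight_loss_g / (density_g_cm3 * area_cm2 * hours)

# Hypothetical mild-steel strip coupon: 0.0500 g lost over 60 days of exposure
rate = corrosion_rate_mpy(0.0500, 7.86, 23.0, 60 * 24)
print(round(rate, 2), "mpy")
```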

3.3 New Approaches in Corrosion Control

The techniques described above are stand-alone methods for corrosion control that do not allow corrosion to be monitored in real time (Figure 4).
Figure 4 – Differences between off-line, on-line and in-line measurements[28]
In the last few years, with the development of automation and Distributed Control Systems (DCS), it has become possible to control corrosion in real time and optimize system productivity (Figure 5).
Figure 5 – Example of corrosion monitoring integrated with other process variables (28)

However, problems in integrating corrosion measurements within a DCS remain, because the measurements are qualitative rather than quantitative (28); therefore, they cannot be used as process variables to be manipulated. At the same time, no single method can evaluate all the different kinds of corrosion. Recently, new multivariable corrosion transmitters[29] and wireless systems[30] have been developed, but further efforts are needed to reduce the risks of corrosion.

4. Conclusions

Corrosion control is a real problem for industrial processes. It affects all sectors and, in hazardous plants such as oil refineries, it can cause serious damage to the environment and people (i.e. the Sinopec gas pipeline explosion)[31]. Several methods for corrosion mitigation (cathodic protection, protective coatings, etc.) and monitoring (eddy current techniques, corrosion coupons, etc.) exist. Despite this, corrosion causes trillions of US$ in losses, currently 3-4% of the Global Gross Domestic Product. It is therefore necessary to control corrosion by integrating corrosion transmitters within DCS systems (i.e. SmartCET)29 and by equipping skilled professionals with the latest-generation technologies.

[1] R. Winston Revie, Corrosion and Corrosion Control: An Introduction to Corrosion Science and Technology, Fourth Edition, Wiley-Interscience, 2008.
[2] P. Pedeferri, Corrosione e protezione dei materiali metallici, Polipress, 2010.
[7] M. H. Wood et al., Corrosion-Related Accidents in Petroleum Refineries, European Commission Joint Research Centre, 2013.
[9] B. Valdez et al., Corrosion Control in Industry, Chapter 2, in Environmental and Industrial Corrosion - Practical and Theoretical Aspects, InTech 2012.
[11] A. Groysman, Corrosion Problems and Solutions in Oil Refining and Petrochemical Industry, Springer 2016.
[12] Günter Schmitt, Global Needs for Knowledge Dissemination, Research, and Development in Materials Deterioration and Corrosion Control, World Corrosion Organization, 2009.
[14] G. Camila, Corrosion Inhibitors - Principles, Mechanisms and Applications, InTech, 2014.
[16] H. Kanematsu and D. M. Barry, Corrosion Control and Surface Finishing: Environmentally Friendly Approaches, Springer, 2016.
[17] S. Papavinasam, Corrosion Control in the Oil and Gas Industry, 1st Edition, Gulf Professional Publishing, 2013.
[21] L. Yang, Techniques for Corrosion Monitoring, Woodhead Publishing in Materials, 2008.
[28] R. D. Kane, A new approach to corrosion monitoring, Chemical Engineering,

New Materials for Emerging Energy Technologies

 Author: Giovanni Franchi-Chemical Engineer- Cooperation Contract -University UCBM - Rome (Italy)

1. Theme Description

Since 2007, with the Strategic Energy Technology Plan (SET-Plan), the European Commission has promoted the development of new technologies that improve sustainability and efficiency while reducing costs. This is pursued by coordinating the national research of the European countries and by financing projects.[1]

With Horizon 2020, the EU provides the financial instrument to achieve these goals. Part of Horizon 2020 is the Leadership in Enabling and Industrial Technologies (LEIT) programme, which supports the development of nanotechnologies, advanced materials, manufacturing and processing, and biotechnology.[2]

In this context, the most promising energy technologies include[3]:

  • artificial photosynthesis;
  • piezoelectric materials;
  • thermoelectric structural power materials;
  • low energy nuclear reactions.

The aim of developing these innovative materials is to reduce resource and energy consumption. Artificial photosynthesis could be used to produce energy from the sun without intermediate energy carriers (only a small fraction of the roughly 120,000 TW of solar power reaching the Earth is used for human activities)[4]; thermoelectric generators could convert waste heat into electricity (in the USA, for example, waste heat amounts to about 36 TWh/year)[5].

In the following sections, the state of the art and the future trends of these technologies are described.


2. Technologies: State of the Art and Future Perspectives


2.1 Artificial Photosynthesis

Artificial photosynthesis mimics natural photosynthesis, in which chlorophyll uses sunlight to break down H2O molecules into hydrogen, electrons and oxygen. Hydrogen and electrons convert CO2 into carbohydrates, whereas the oxygen is expelled. In artificial photosynthesis, both oxygen and hydrogen can be produced. In this way, hydrogen can be used to produce energy, or to produce artificial fuels such as methanol. The main problem of the process is splitting the water molecules; the system requires catalysts such as manganese, titanium dioxide and cobalt oxide.[6]
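
As a sense of scale for the water-splitting step mentioned above, the minimum electrical work per mole of H2 can be estimated from the standard reversible cell voltage (1.23 V); a back-of-envelope sketch, not a device model:

```python
# Minimum electrical work to split water: W = n * F * E_rev per mole of H2.
# E_rev = 1.23 V is the standard-condition reversible voltage; real devices
# need more because of overpotentials (hence the search for good catalysts).
F = 96485.0      # Faraday constant, C/mol
E_REV = 1.23     # V, reversible cell voltage
N_ELECTRONS = 2  # electrons transferred per H2 molecule

energy_kj_per_mol_h2 = N_ELECTRONS * F * E_REV / 1000.0
print(f"minimum electrical energy: {energy_kj_per_mol_h2:.0f} kJ per mol H2")
```

The result (~237 kJ/mol) matches the standard Gibbs free energy of water splitting, which is why 1.23 V is the thermodynamic floor for any such device.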

Scientists are studying nanomaterials[7] and new processes[8] to improve efficiency. Today, artificial photosynthesis devices are not competitive with conventional energy equipment, and tests are performed only at laboratory scale.

In the figures below two different devices are shown:

  • Photo-electrochemical biofuel cell;
  • Water splitting cell.


Figure 1 - a) Photo-electrochemical biofuel cell and b) water splitting cell[5]

The first system uses sunlight to consume a biofuel (ethanol or methanol) and to generate hydrogen. The anode is a glass plate covered by a transparent conductor (indium tin oxide or fluorinated tin oxide) coated with a thin layer of nanoparticulate material (tin dioxide or titanium dioxide). The electrode is immersed in an aqueous solution of NADH/NAD+. The absorbed energy generates electrons that flow to the cathode (e.g., a platinum electrode) immersed in the same solution, separated by a membrane permeable to hydrogen protons (H+). Hydrogen or, if oxygen is present, electricity is produced. In the second system, the biofuel is replaced by an oxidation catalyst (IrO2∙nH2O), whereas the NADH solution is replaced by a ruthenium solution. The latter injects electrons into the TiO2; these electrons flow to the cathode, where hydrogen protons are reduced to hydrogen.


2.2 Piezoelectric Materials

Piezoelectric materials are widespread in our lives. They are used in cars (fuel injection, airbags, parking sensors), in mobile phones (camera focus), in hospitals (microsurgery), and in pressure sensors and transducers. When these materials are subjected to a mechanical stress, they generate electric energy proportional to the stress. Vice versa, when an electric field is applied, the piezoelectric material produces a mechanical strain[9].


Figure - Common rail injector[10]

Nowadays piezoelectric materials can be divided into three groups:

  1. natural crystals (quartz);
  2. ceramics (lead zirconate titanate, PZT);
  3. polymers (polyvinylidene fluoride, PVDF).

Quartz has the highest quality factor (a parameter that characterizes the sharpness of the electromechanical resonance spectrum), making it suitable for low-loss transducers, whereas PZT has the highest electromechanical coupling factor (corresponding to the rate of electromechanical transduction) and piezoelectric strain constant (a measure of the strain induced by an external electric field), making it suitable for high-power transducers. PVDF has a high voltage constant and mechanical flexibility, so it is suitable for pressure-sensor applications[10].
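
To make the constants above concrete, the direct effect can be sketched as charge generated per unit force via the piezoelectric charge constant d33. The values below are order-of-magnitude literature figures, used here only for illustration, not datasheet numbers:

```python
# Direct piezoelectric effect sketch: Q = d33 * F (charge = constant * force).
# d33 values are indicative orders of magnitude (assumptions for illustration).
d33_pc_per_n = {
    "quartz": 2.3,   # natural crystal: small d, high quality factor
    "PZT": 400.0,    # ceramic: large d, high coupling factor
    "PVDF": 30.0,    # polymer: flexible (magnitude only; sign is negative)
}

force_n = 10.0  # compressive force applied along the poling axis
for material, d in d33_pc_per_n.items():
    print(f"{material}: {d * force_n:.0f} pC under a {force_n:.0f} N load")
```

The two-orders-of-magnitude gap between quartz and PZT is why ceramics dominate actuator and high-power applications despite their lead content.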

The most widely used is lead zirconate titanate (Pb(Zr,Ti)O3), and the challenge is to find new materials, because this ceramic contains about 60% lead by weight, a toxic material subject to increasing environmental restrictions.[4]


2.3 Low Energy Nuclear Reactions

In 1989, Stanley Pons and Martin Fleischmann reported, in a small-scale laboratory experiment, a high release of heat, without radiation, obtained by electrochemically charging deuterium into palladium. This was called "cold fusion". Nowadays cold fusion is included in the class of Low Energy Nuclear Reactions (LENR), and other materials (lithium and nickel) have been found to produce the same effect.[11]

Unlike hot fusion, LENR requires solid materials and does not need a high flux of neutrons. The heat released is a function of the deuterium concentration in palladium (the phenomenon is observed only if D/Pd > 0.9), hence a proper metallurgy needs to be found.[4] Meanwhile, a first large-scale hot fusion reactor is under construction (the ITER project[12]).

The following table shows the main experiments and materials.

Electrochemical loading is mainly based on Pd alloys with deuterons from heavy water, because this is the system used in the Fleischmann and Pons experiments. For gas loading, however, Ni alloys with protons from hydrogen gas are preferred.


Table - LENR experiments[13]


One of the most promising experiments is Rossi's E-Cat reactor. External heat (electric or fossil) is applied to the reaction chamber. The reactions begin when the reactor temperature reaches 60 °C and produce a large amount of heat (more than the energy input). This energy can be used to heat water and produce steam. When the reaction is stable, the external heat can be turned off and the reactions continue for hours. The first plant (1 MWth) was tested in Bologna on October 28th, 2011; it ran for 5.5 hours at a claimed average output of 479 kW.

Small E-Cat reactors (10-20 kW) for the domestic market are being tested (Rossi's Leonardo Corporation).[14]


Figure - 1 MWth E-Cat experimental apparatus[15]


2.4 Thermoelectric Generators

A thermoelectric system uses the Seebeck effect, which allows electrical power to be generated from a temperature gradient. The system consists of couples of n-p semiconductors connected electrically in series and thermally in parallel. When a temperature gradient is applied, mobile charge carriers diffuse from the hot side to the cold side: electrons in the n-type leg and holes in the p-type leg. The net charge accumulation produces an electrostatic potential.


Figure - Thermoelectric Generators[16]


The efficiency is estimated by means of a dimensionless group, the figure of merit:

ZT = (α²·σ·T) / k

where:

  • α = Seebeck coefficient;
  • σ = electrical conductivity;
  • k = thermal conductivity;
  • T = absolute temperature.

Therefore, materials should have high Seebeck coefficient and electrical conductivity and small thermal conductivity.

Nowadays, materials used for this application are divided into three groups depending on the temperature[17]:

  1. bismuth telluride (Bi2Te3) at low temperature (< 400 K);
  2. lead telluride (PbTe) at middle temperature (600-900 K);
  3. silicon germanium (SiGe) at high temperature (> 900 K).

The figure below reports the history of thermoelectric materials from 1960 up to now. There are three different regions:

  • ZT ~ 1, with efficiencies of 4-5%;
  • ZT ~ 1.7, obtained by the introduction of nanostructures, with efficiencies of 11-15%;
  • ZT > 1.7, with efficiencies near 15-20%.
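
These efficiency figures can be roughly cross-checked from ZT using the standard maximum-efficiency expression for a thermoelectric generator, η = ηC·(√(1+ZT) − 1)/(√(1+ZT) + Tc/Th). The temperatures below are illustrative assumptions; the exact percentages depend strongly on the operating ΔT:

```python
import math

def teg_max_efficiency(zt, t_hot, t_cold):
    """Maximum efficiency of a thermoelectric generator (standard formula)."""
    carnot = 1.0 - t_cold / t_hot          # Carnot limit for this gradient
    m = math.sqrt(1.0 + zt)                # sqrt(1 + ZT)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Illustrative case: hot side 500 K, cold side 300 K (assumed temperatures)
for zt in (1.0, 1.7, 2.5):
    print(f"ZT = {zt}: max efficiency ~ {teg_max_efficiency(zt, 500, 300):.1%}")
```

With a larger temperature difference the same ZT yields a higher efficiency, which is why the literature percentages above are quoted as ranges rather than single values.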

The most widely used of these materials is Bi2Te3, but this alloy is toxic for the environment. For this reason, alloys of Mg2Si, CoSb3, ZnSb and ZnO have been studied to find a new class of materials.[4]


Figure - History of thermoelectric materials from 1960 to 2016[17]



These technologies belong to the portfolio of low-carbon energy technologies and fit well within the European "2050 Energy Strategy", a strategic plan that aims to reduce greenhouse gas emissions by 80-95% compared to 1990 levels by 2050.[18]

Further R&D efforts are needed on new materials to allow commercialization. Regarding artificial photosynthesis, innovative materials and low-cost fabrication techniques (e.g., hydrothermal synthesis and chemical vapor deposition) have been introduced[7]; however, experimental tests are still carried out at laboratory scale. Piezoelectric materials are widespread, but new alloys with lower lead content are necessary. LENR experiments are difficult to reproduce and control, and tests are limited to a few hours of continuous operation. Thermoelectric materials have low efficiencies, therefore new alloys are necessary to improve the figure of merit (ZT).


[3] European Commission, Forward Looking Workshop on Materials for Emerging Energy Technologies, 2012.
[4] Gust et al., Solar fuels via artificial photosynthesis, Accounts of Chemical Research 2009, 42(12), pp 1890-1898.
[5] H. Alam, S. Ramakrishna, A review on the enhancement of figure of merit from bulk to nano-thermoelectric materials, Nano Energy 2013, 2(2), pp 190-212.
[7] Y. Tachibana et al., Artificial photosynthesis for solar water-splitting, Nature Photonics 2012, 6(8), pp 511-518.
[9] J. Holterman and P. Groen, An Introduction to Piezoelectric Materials and Applications, Stichting Applied Piezo, 2013.
[10] K. Uchino, Advanced Piezoelectric Materials: Science and Technology, second edition, Woodhead Publishing, 2017.
[11] J. R. Pickens, D.J. Nagel, The status of low energy nuclear reactions technology, 2016.
[13] D.J. Nagel, Evidence of Operability and Utility from Low Energy Nuclear Reaction Experiments, 2017, NUCAT Energy LLC.
[15] E-Cat Australia Pty Ltd, E-CAT - a paradigm shift in green energy production.
[16] G.J. Snyder and E.S. Toberer, Complex thermoelectric materials, Nature Materials 2008, 7, pp 105-114.
[17] X. Zhang, L-D. Zhao, Thermoelectric materials: energy conversion between heat and electricity, Journal of Materiomics 2015, 1(2), pp 92-105.

Carbon Dioxide Recycling

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)

1. Theme description

Nowadays, CO2 recycling is one of the possible contributions to CO2 mitigation and an opportunity to use a low-cost (or even negative-cost, when considering taxes on emissions) carbon source.

CO2 recycling introduces a shorter path (in terms of time) to close the carbon cycle compared to natural cycles, and/or an additional way to store CO2 in materials with a long lifetime; in addition, it is a way to store renewable energy and/or to use an alternative carbon source to fossil fuels. Moreover, CO2 recycling produces valuable products that can be marketed and thus adds economic incentives to the reduction of CO2 emissions, while options such as storage only add costs. Carbon capture and recycling (CCR) also avoids the costs associated with transporting CO2.

Recycling of CO2 is therefore a possible contributor, together with other technologies, to a solution for the global issue of GHG emissions, but has only started to be considered in detail in recent years.

The lifetime of the products of CO2 conversion is another important aspect (see Figure 1). The IPCC report on CO2 capture and storage[1] selected as a crucial parameter the time lapse between the conversion of CO2 into a product and the release of CO2 back into the atmosphere. A long lifetime of the CO2-based product fixes the molecule for a long time, preventing its re-release into the atmosphere. Most product lifetimes range between several months and a few years, with the exception of inorganic carbonates and polymers based on organic carbonates, which store CO2 for decades to centuries.

CCR can also be viewed as a way to introduce renewable energy into the chemical and energy chain[2], by storing solar, geothermal, wind, or other energies in chemical form. The resulting chemical facilitates storage and transport of energy, and is particularly important if it is compatible with the existing energy infrastructure and/or can be easily integrated into the existing chemical chain. Therefore, recycling CO2 is an opportunity to limit the use and drawbacks of fossil fuels, while avoiding the high costs (including energy) associated with a change in the current energy and chemical chain. In considering CO2 recycling, the effect is thus not only direct, that is, the subtraction of CO2 from emissions, but a combination of direct and indirect effects that amplifies the impact. Finally, CO2 finds utilization when there is a profitable cost/benefit trade-off linked to (re)using CO2 in place of the existing technology, regardless of any considerations linked to capture and storage policies.

In the following, the emerging large-scale CO2 conversion routes will be briefly analysed.

Figure 1 - Summary of the different options for CO2 valorization[3].

Notes: Necessary timeframe for development: 1 More than 10 years → 4 Industrial; Economic Perspectives: 1 Difficult to estimate→4 Available industrial data; External use of energy: 1 Difficult to decrease→4 No need; Volume CO2 (potential): 1 Less than 10 Mt→4 More than 500 Mt; Time of sequestration: 1 Very short→4 Long term; Undesirable impacts on environment (utilization of solvents, utilization or production of toxic or metallic compounds, utilization of scarce resources): 1 Significant→4 Low.


2. Non-Biological Route

CO2 recycling by the non-biological route can be divided into three sub-routes: inorganic reactions, organic reactions, and syngas production with further conversion.

2.1 Inorganic Reactions

Mineral carbonation, that is, the formation of carbonate from naturally occurring minerals such as silicate-rich olivine and serpentine, is an already well-recognized carbon storage option[4].

Calcium carbonate is a key product, for example, of the Solvay process for the production of Na2CO3 and NaHCO3, and can be mined as limestone. An extensive market also exists for synthetic or precipitated calcium carbonate for applications in the paper industry, plastics, rubber and paint products, with an estimated global market of more than 15 Mt a−1[5].

One of the most promising processes for converting CO2 from flue gases into bicarbonate is Skyonic's patented CO2 mineralization process SkyMine[6], the first for-profit system converting flue-gas CO2 into bicarbonate (baking soda) as its main commercial product. In 2010 the US Department of Energy (DoE) provided 25 million US$ to support the industrialization of this carbon capture technology, which can be retrofitted to existing plant infrastructure.
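
The stoichiometry behind the bicarbonate route gives a quick feel for the mass flows involved. This is a sketch assuming the overall absorption reaction CO2 + NaOH → NaHCO3; actual process yields will differ:

```python
# Stoichiometric product-to-CO2 mass ratio for the mineralization route above:
# CO2 + NaOH -> NaHCO3 (sodium bicarbonate, the main commercial product).
M_CO2 = 44.01       # g/mol
M_NaHCO3 = 84.007   # g/mol

tonnes_bicarbonate_per_tonne_co2 = M_NaHCO3 / M_CO2
print(f"{tonnes_bicarbonate_per_tonne_co2:.2f} t NaHCO3 per t CO2 captured")
```

Each tonne of captured CO2 thus yields roughly 1.9 tonnes of saleable bicarbonate, which is what makes the "for-profit" framing of the process plausible.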

Another project (the Calera project) was selected under the same 2010 funding act (DoE share: 20 million US$) and focuses on the production of mineral end-products such as building materials, for example carbonate-containing aggregates or supplementary cement-like materials. Inspired by the biogenesis of coral reefs, the heart of the technology consists of precipitating captured CO2 as novel (meta)stable carbonates and bicarbonates with magnesium- and calcium-rich brines; the CO2 originates from captured flue gas (from fuel combustion or other large plants) and the brine from seawater or alkaline industrial waste sources[7].

2.2 Organic Reactions

The synthetic routes from CO2 to organic compounds that contain three or more carbon atoms number in the tens, as extensively reviewed[8],[9],[10], but only five are earmarked as industrialized. Figure 2 gives an overview of some of the possible organic chemicals produced from CO2. Among these, the most important are urea, acrylates, lactones, carboxylic acids, isocyanates, polycarbonate via monomeric cyclic carbonate, alternating polyolefin carbonate polymers, polyhydroxyalkanoates, polyether carbonate polyols and chlorinated polypropylene.


Figure 2 - A summary of organic chemicals produced from CO2[11].

2.3 Syngas formation and further conversion

The chemical reduction of thermodynamically stable CO2 to low-molecular-weight organic chemicals requires high-chemical-potential reducing agents such as H2, CH4, electrons, and others. The hydrogenation of CO2 can be connected to the well-established portfolio of chemicals synthesized from syngas (CO/H2) via the reverse water–gas shift (RWGS) reaction, where methanol, formic acid, and hydrocarbons emerge as the three main products of interest (see figure 3).

Methanol is one of the chemicals with the largest potential to convert very large volumes of CO2 into a valuable feedstock. It is already a commodity chemical, manufactured on a large scale (40 Mt in 2007)[12], mainly as a feedstock for the chemical industry towards products such as formaldehyde, methyl tert-butyl ether (MTBE), and acetic acid. This makes CH3OH a preferable alternative to the Fischer–Tropsch (FT) route, due to the broader range of chemicals/products, and hence application fields, as well as higher productivity.

An alternate source of reducing hydrogen can be methane. The complete hydrogenation of CO2 to methane is the Sabatier reaction:

CO2 + 4 H2 → CH4 + 2 H2O

In terms of hydrogen consumption, and hence overall energetics, CO2 reduction to methanol rather than to methane might appear favorable, given the better ratio by energy value of the product relative to the starting H2; nevertheless, specific conditions (for example, the need to produce substitute natural gas, SNG), know-how, and other local conditions have spurred industrial applications of the Sabatier reaction.
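
The "ratio by energy value of the product relative to the starting H2" mentioned above can be illustrated with approximate lower heating values (kJ/mol, rounded literature figures used here as assumptions):

```python
# Energy retained per route: LHV(product) / (moles of H2 consumed * LHV(H2)).
# Lower heating values are approximate literature numbers, for illustration.
LHV = {"H2": 242.0, "CH4": 802.0, "CH3OH": 638.0}  # kJ/mol

routes = {
    "methanol (CO2 + 3 H2 -> CH3OH + H2O)": ("CH3OH", 3),
    "methane  (CO2 + 4 H2 -> CH4 + 2 H2O)": ("CH4", 4),
}

ratios = {}
for name, (product, h2_moles) in routes.items():
    ratios[name] = LHV[product] / (h2_moles * LHV["H2"])
    print(f"{name}: {ratios[name]:.0%} of the input H2 energy retained")
```

With these numbers methanol retains slightly more of the input hydrogen energy than methane, consistent with the text's observation that methanol "might appear favorable" on energetics alone.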

Figure 3 - CO2 routes to chemistry and energy products via syngas

3. Biological Route

Photosynthesis is the largest-scale CO2 conversion process, since it is present in all plants and photosynthetic micro-organisms (including microalgae and cyanobacteria).

In terms of CO2 consumption, a total of 1.8 tons of CO2 is needed to produce 1 ton of algal biomass[13]. Microalgae also need nitrogen and phosphorus nutrients. The integration of chemicals and energy production in large-scale industrial algal biofarms has led to the "algal biorefinery" concept[14]. The chemical products of the biorefinery include carbohydrate and protein extracts, fine organic chemicals (e.g., carotenoids, chlorophyll, fatty acids) for food supplements and nutrients, pharmaceuticals, pigments, cosmetics, and others, along with energy fuels, for example biodiesel, bioethanol, and biomethane. Biochemical conversion aimed exclusively at energy (e.g., anaerobic digestion, alcoholic fermentation, photobiological hydrogen production) has recently been reviewed by Brennan and Owende[15].
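
The 1.8 t CO2 per tonne of biomass figure can be checked from simple carbon stoichiometry, assuming (as a round illustrative number) that dry algal biomass is about 50% carbon by weight:

```python
# Back-of-envelope check of the ~1.8 t CO2 per t algal biomass figure:
# every tonne of fixed carbon requires 44/12 tonnes of CO2.
M_C, M_CO2 = 12.011, 44.009   # g/mol
carbon_fraction = 0.50        # assumed carbon content of dry biomass

co2_per_tonne_biomass = carbon_fraction * (M_CO2 / M_C)
print(f"{co2_per_tonne_biomass:.2f} t CO2 per t biomass")
```

The result (~1.83) lands right on the literature value, suggesting the 1.8 figure is essentially a carbon-content statement.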

Thus, even if the current stage of development of algal carbon capture at large emitter sites indicates an economic cost that is still too high, there are signs of fast scientific and technological development in this area, including improvements in:

  • photobioreactor design (e.g., surface area, light path, layer thickness)[16];
  • harvesting and processing technologies, including substantial simplification due to self-excreting algae[17];
  • photosynthetic efficiency, productivity, compatibility with concentrated CO2 streams, and tuning to the desired end-product by genetic engineering[18].

Another interesting technology is power-to-gas, which is being explored mainly with a focus on storing renewable energy; project developers so far tend to use CO2 from biogas as the carbon source for methanation, and hydrogen may also be directly mixed with biogas (see Figure 4). Although these plants might provide very useful insights into the options for CO2 capture, methanation, and hydrogen storage, biogas as a carbon source may prove sustainable only if derived from (wet) waste and sewage[19].

In the same field, INPEX is active with research that involves injecting CO2 into the ground, using CCS or CO2 Enhanced Oil Recovery (EOR), in order to produce methane by means of microbes that live in oil and gas fields and water-bearing strata (see Figure 5). A constant supply of hydrogen is vital to the microbes' survival. INPEX has performed indoor experiments that use electrochemical hydrogen reduction, and the research has confirmed the electrochemical activation of methane production by microbes, including microbes that live in an oil field in Japan[20].

Figure 4 - Power-to-gas technology scheme
Figure 5 - INPEX project

To conclude: optimistically assuming that all the options for CO2 utilization can be fully implemented, and considering that the use of CO2 as a carbon source partly avoids the use of fossil fuels and incorporates renewable energy into the chemicals and energy chain (and thus has a more widespread impact than on GHG emissions alone), a potential reduction equivalent to 250–350 Mt a−1 can be estimated in the short to medium term. This amount represents about 10% of the total reduction required globally; that is, it is comparable to the expected impact of carbon capture and storage technologies, but with additional benefits in terms of (i) fossil fuel savings; (ii) additional energy savings; (iii) accelerating the introduction of renewable energy into the chemicals and energy chain.

[1] IPCC Special Reports: Carbon Dioxide Capture and Storage (Eds.: B. Metz, O. Davidson, H. de Coninck, M. Loos, L. Meyer), Cambridge University Press, Cambridge 2006.
[2] G. Centi, S. Perathoner, Greenhouse Gases Sci. Technol. 2011, 1, 21– 35.
[3] N. Thybaud, D. Lebain, Panorama des voies de valorisation du CO2, l’Agence de L’Environnement et de La Matrise de L’Energie, ALCIMED, 2010.
[4] W. Seifritz, Nature 1990, 345, 486 –486.
[5] Roskill Information Services, 2008. See reports.html.
[6] J. D. Jones, D. St. Angelo, WO200939445, 2009.
[7] D. Biello, Sci. Am., August 7, 2008. See: article.cfm?id=cement-from-carbon-dioxide.
[8] Carbon Dioxide as Chemical Feedstock (Ed.: M. Aresta), Wiley-VCH, Weinheim 2010.
[9] T. Sakakura, J.-C. Choi, H. Yasuda, Chem. Rev. 2007, 107, 2365 –2387.
[10] A. Decortes, A. M. Castilla, A. W. Kleij, Angew. Chem. 2010, 122, 10016 – 10032; Angew. Chem. Int. Ed. 2010, 49, 9822 –9837.
[11] Y. Zhang, S. N. Riduan, Dalton Trans. 2010, 39, 3347- 3357.
[12] G. A. Olah, A. Goeppert, G. K. Surya Prakash, Beyond Oil and Gas: The Methanol Economy, 2nd Edition, Wiley-VCH, Weinheim 2009.
[13] A. M. J. Kliphuis, L. de Winter, C. Vejrazka, D. E. Martens, M. Janssen, R. H. Wijffels, Biotechnol. Prog. 2010, 26, 687–696.
[14] Biorefineries: Adding Value to the Sustainable Utilization of Biomass, International Energy Agency, Paris 2009.
[15] L. Brennan, P. Owende, Renewable Sustainable Energy Rev. 2010, 14, 557 –577.
[16] O. Pulz, Appl. Microbiol. Biotechnol. 2001, 57, 287–293.
[17] N. T. Eriksen, Biotechnol. Lett. 2008, 30, 1525 – 1536.
[18] N. Eriksen, Appl. Microbiol. Biotechnol. 2008, 80, 1 –14.
[19] Carbon Recycling for Renewable Materials and Energy Supply – Recent Trends, Long-Term Options, and Challenges for Research and Development, Journal of Industrial Ecology 2014, Vol. 18, Issue 3, 327-340

Water Treatment and Reuse with Electrocoagulation in the Oil & Gas Industry

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)

1. Theme description

Electrocoagulation (EC) combines conventional treatments such as coagulation and flotation with electrochemistry. The process destabilizes soluble organic pollutants and emulsified oils in aqueous media by introducing highly charged species that neutralize the electrostatic charges on particles and oil/emulsion droplets, facilitating agglomeration/coagulation (and the subsequent separation from the aqueous phase). In comparison with conventional coagulation processes, the smallest charged particles have a greater probability of being coagulated because of the electric field that sets them in motion. Moreover, an "electrocoagulated" floc tends to contain less bound water, is more shear-resistant, and is more readily filterable[1].

EC has been known since 1909 (aluminium/iron-based electrocoagulation patent by A.E. Dietrich)[2]. It has been most commonly used in the oil & gas, construction, and mining industries to separate emulsified oil, petroleum hydrocarbons, suspended solids and heavy metals from effluents. In the oil & gas sector in particular, EC is fundamental for treating and reusing (on-site) the water needed for drilling and fracking processes, minimizing the impact of injection wells. The application market has not yet taken off because of high costs, but changes in regulations and growth in the cited industrial sectors have recently brought electrocoagulation to the forefront[3].


2. Principles of Electrocoagulation

Basically, the electrocoagulation apparatus consists of a sacrificial anode, which produces coagulant metal ions, and a cathode made of metal plates (both submerged in the aqueous solution). The electrodes are usually made of cheap and non-toxic metals such as aluminium and iron. According to Faraday's law, the dissolved mass m is proportional to the applied current I and the treatment time ts:

m = (I · ts · M) / (z · F)

where z is the valence of the ions of the electrode material, M is the molar mass of the metal and F is Faraday's constant (96485 C/mol). Coagulation is brought about by the reduction of the net surface charge; the colloidal particles (previously stabilized by electrostatic repulsion) can approach closely enough for van der Waals forces to cause aggregation. The reduction of the surface charge is a consequence of the decrease of the repulsive potential of the electrical double layer in the presence of an electrolyte of opposite charge (Fig. 1).
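
Faraday's law translates directly into a sizing estimate for the sacrificial anode; a minimal sketch assuming 100% current efficiency (real cells deviate somewhat from the ideal):

```python
# Sacrificial-anode dissolution via Faraday's law: m = I * ts * M / (z * F).
FARADAY = 96485.0  # C/mol

def dissolved_mass_g(current_a, time_s, molar_mass_g_mol, valence):
    """Mass of metal dissolved from the anode, assuming 100% current efficiency."""
    return current_a * time_s * molar_mass_g_mol / (valence * FARADAY)

# Example: 10 A applied for 1 hour to an aluminium anode (M = 26.98 g/mol, z = 3)
mass = dissolved_mass_g(10.0, 3600.0, 26.98, 3)
print(f"Al dissolved: {mass:.2f} g")
```

A few grams of dissolved aluminium per ampere-hour is the coagulant dose the downstream hydrolysis and floc formation then have to work with.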



Fig. 1 - Conceptual representation of the electrical double layer in colloidal particles[4]

The classical representation of EC dissolution with the induced separation mechanisms (coagulation, flocculation and flotation) is reported in Fig. 2. The following main reactions take place during EC (written here, as in standard treatments, for an aluminium anode):

Anode (oxidation): Al → Al3+ + 3e−
Cathode (reduction): 2 H2O + 2e− → H2 + 2 OH−
In solution: Al3+ + 3 H2O → Al(OH)3 + 3 H+


The metals and other contaminants, suspended solids and emulsified oils are entrained within the floc because of the neutralization of surface charges (destabilization). Destabilization also occurs by “sweep flocculation”, where impurities are trapped and removed in the amorphous hydroxide precipitate produced. Microbubbles (mainly of H2 and O2) adhere to agglomerates helping to separate and lift the flocs up to the surface. Depending on the application, the final solids separation step can be done using settling tanks, media filtration, ultrafiltration, and other methods.



Fig. 2 - Schematic representation of typical reactions during the EC treatment

Ferrous iron may be oxidized to Fe3+ by oxygen or by anodic oxidation, and the formation of active chlorine species can enhance the performance of EC. Both Fe and Al ions form complexes with OH− ions. The formation of these complexes depends strongly on the pH of the solution, as shown in Fig. 3: above pH 9, Al(OH)4− and Fe(OH)4− are the dominant species. Anions such as sulphate or fluoride affect the composition of the hydroxides, because they can participate in side reactions and replace hydroxide ions in the precipitates. Temperature affects floc formation, reaction rates and conductivity. The pollutant concentration affects the removal efficiency, because coagulation follows pseudo-second- or first-order kinetics. In fact, Ezechi et al. showed second-order kinetics for boron adsorption onto Fe(OH)3 in EC. That work reported a removal efficiency of almost 97% using iron plate electrodes (inter-electrode distance of 0.5 cm, boron concentration of 15 mg/L in produced water, pH 7.84, current density of 12.5 mA/cm2).
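
As an illustration of pseudo-second-order behaviour, the residual concentration can be written as C(t) = C0/(1 + k·C0·t). The rate constant below is an assumed value chosen for illustration, not the fitted constant from Ezechi et al.:

```python
# Pseudo-second-order decay of pollutant concentration: C(t) = C0 / (1 + k*C0*t).
def conc_second_order(c0_mg_l, k, t_min):
    """Residual concentration under second-order kinetics."""
    return c0_mg_l / (1.0 + k * c0_mg_l * t_min)

c0 = 15.0   # mg/L boron, as in the produced-water case cited above
k = 0.05    # L/(mg*min), assumed for illustration
for t in (0, 10, 30, 60):
    c = conc_second_order(c0, k, t)
    print(f"t = {t:3d} min: C = {c:.2f} mg/L ({1 - c / c0:.1%} removed)")
```

With this assumed k, removal after an hour is around 98%, the same order as the ~97% reported experimentally; the characteristic second-order signature is the fast initial drop followed by a long slow tail.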



Fig. 3 - Concentrations of soluble monomeric hydrolysis products of Fe(III) and Al(III) at 25 °C[5]

This technique does not work properly in cases of low conductivity (less than 300 μS/cm), low suspended solids (turbidity less than 25 NTU or TSS less than 20 mg/L), non-polar and monovalent contaminants (aqueous salts of Na, K, Cl, F, etc.), or non-polar, uncharged particles.


3. Produced water treatment & reuse

The literature reports many applications of EC to water treatment and reuse. Among them, the treatment of oily wastewater and produced water is relevant for the oil & gas sector. Produced water (PW) is the water trapped in the reservoir rock, subsisting under high pressure and temperature, and brought up along with oil or gas during production. Other components are salts, whose content depends on the source (seawater or groundwater), as well as dispersed hydrocarbons, dissolved hydrocarbons, dissolved gases (such as H2S and CO2), bacteria and other organisms, and dispersed solid particles. PW may also include chemical additives (corrosion inhibitors, oxygen scavengers, scale inhibitors, emulsion breakers and clarifiers, flocculants and solvents) used in pre-treating, drilling and producing operations, as well as in the downstream oil/water separation process. These chemicals affect the oil/water partition coefficient, toxicity, bioavailability, and biodegradability.[6]

PW is considered an industrial waste, and its disposal to surface waters or its evaporation in ponds is subject to stringent environmental regulations. It should be treated and reinjected for pressure maintenance, replacing aquifer water, or reused for irrigation or as industrial process water. Many companies propose their own EC systems (WaterTectonics, F&T Water Solutions, Bosque Systems, etc.) for the treatment of PW. The conference proceedings of IDA[7] give some interesting examples of EC pretreatment for water reuse in the oil and gas industry. A typical process scheme, taken from a pilot plant presented at the conference[7], is reported in Fig. 4.


Fig. 4 - Process scheme of a water reuse treatment plant (EC/UF/CE/RO/UV) and detail of the RO process scheme

Eames reports the case study of an oil field in Colombia's Meta Province, provided with an EC/DAF/UF/RO train for wastewater reuse (3,000 BPD of water for agricultural and surface irrigation, <60 ppm sodium), with the characteristics reported in the table below. Piemonte et al. also proposed a process analysis, with energy and material balances, of a produced water treatment train including a Vibratory Shear Enhanced Processing (VSEP) membrane system (secondary treatment) and RO as tertiary treatment, to achieve the quality needed for water reuse[8].

[1] IDA
[2] Kuokkanen et al., Recent Applications of Electrocoagulation in Treatment of Water and Wastewater—A Review. Green and Sustainable Chemistry, 2013, 3, 89-121
[4] Mikko Vepsäläinen. PhD Thesis. Electrocoagulation in the treatment of industrial waters and wastewaters. VTT SCIENCE 19 JULKAISIJA – UTGIVARE – PUBLISHER (2012)
[5] Mikko Vepsäläinen. PhD Thesis. Electrocoagulation in the treatment of industrial waters and wastewaters. VTT SCIENCE 19 JULKAISIJA – UTGIVARE – PUBLISHER (2012)
[7] IDA (2013) Water Recycling and Desalination in the Oil & Gas Industry. Proceeds to Benefit Water-related Humanitarian Projects.
[8] Piemonte et al., Reverse osmosis membranes for treatment of produced water: a process analysis. Desalination and Water Treatment 55, 3, 2015.

Process and Catalyst Innovations in Hydrocracking to Maximize High Quality Distillate Fuel

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)

1. Theme description

Worldwide economic growth continues to drive demand for transportation fuels.

There are several processes presently able to meet individual refinery needs and project objectives[2]. In particular, UOP LLC is one of the most active companies in this field[3]. The basic flow schemes considered by UOP are single-stage or two-stage designs. The UOP two-stage Unicracking process can adopt a separate-hydrotreat or a two-stage flow scheme, as shown in Figure 1. In the separate-hydrotreat flow scheme the first stage provides only hydrotreating, while in the two-stage process the first stage provides hydrotreating and partial conversion of the feed. The second stage provides the remaining conversion of the recycled oil, so that overall high conversion from the unit is achieved. These flow schemes offer several advantages in processing heavier and highly contaminated feeds. Two-stage flow schemes are economical when the throughput of the unit is relatively high.
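
The conversion split between the two stages can be pictured with a toy mass balance; the per-stage conversions below are invented illustrative numbers, not UOP design data:

```python
# Toy two-stage balance: stage 1 hydrotreats and partially converts the feed;
# stage 2 converts the unconverted (recycled) oil coming from stage 1.
feed_tph = 100.0  # fresh feed, t/h (illustrative)
x1 = 0.60         # first-stage conversion (assumed)
x2 = 0.70         # second-stage conversion of the recycled oil (assumed)

to_stage2 = feed_tph * (1.0 - x1)            # unconverted oil routed to stage 2
overall = (feed_tph * x1 + to_stage2 * x2) / feed_tph
print(f"overall conversion: {overall:.0%}")
```

Even modest per-stage conversions compound to a high overall conversion, which is the basic economic argument for the two-stage scheme; recycling the remaining unconverted oil back to the second stage pushes the overall figure higher still.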

The design of hydrocracking catalyst changes depending upon the type of flow scheme employed. The hydrocracking catalyst needs to function within the reaction environment and severity created by the flow scheme that is chosen.


2. Enhanced Hydrocracking Processes

During the early years of hydrocracking, refiners were mainly interested in maximizing the production of naphtha for reforming to high-octane gasoline. However, with advancements in hydrocracking catalyst technology and the demand for maximizing distillate yields from heavier feedstocks, the two-stage design offers a cost-effective option for larger-capacity, maximum-distillate unit operation.

A major difference between the first- and second-stage hydrocracking reaction environments lies in the very low concentrations of ammonia and hydrogen sulfide in the second stage (see Figure 2). The first-stage reaction environment is rich in both ammonia and hydrogen sulfide, generated by hydrodenitrogenation and hydrodesulfurization of the feed. This significantly impacts reaction rates, particularly cracking rates, leading to different product selectivity and catalyst activity between the two stages. The catalyst system can be optimized to obtain a highly distillate-selective overall yield structure. Optimum severity can be set for each stage to achieve the catalyst life target with minimum catalyst volume. Overall, the two-stage design allows optimization of conversion severity between the two stages, maximizing overall distillate selectivity. New advances in the two-stage Unicracking process design include several innovations in each reaction section. The pretreating section uses a high-activity pretreating catalyst that allows hydrotreating at a higher severity, providing good-quality feed for the first-stage hydrocracking section and enabling maximum first-stage selectivity to high-quality distillate. The second stage is optimized by use of a second-stage hydrocracking catalyst specifically designed to take advantage of the cleaner reaction environment, with the cracking and metal functions in balance. At the same time, the second-stage hydrocracking severity is optimized so that maximum distillate selectivity is obtained from the second stage.

Figure 1 - Two-stage Unicracking Process Flow Schemes.
Figure 2 - Two-stage Unicracking Process Flow Schemes.

3. Catalyst Development

Designing catalysts which can be successfully used for processing heavy feeds requires an understanding of the interactions of many factors. Detailed knowledge is increasingly important for controlling reaction pathways to achieve specific product types to meet today’s market demands. The key considerations for optimal catalyst design require good understanding of the molecular transformations of feed to product with respect to catalyst functions and process variables.

Such considerations involve process severity and its impact on the extent of secondary cracking in the hydrocracking reactor. The mechanism of hydrocracking paraffins consists of a sequence of steps beginning with dehydrogenation at metal sites to form olefinic intermediates, which are then protonated at the acid sites to form reactive carbenium ions. These, in turn, can isomerize and leave the catalyst surface without cracking after picking up a hydride ion at the metal sites. Alternatively, they can crack to form smaller alkanes, which then leave the catalyst surface as hydrocracked products[4]. This process of isomerization and cracking to primary cracked products is referred to as “ideal cracking” because it does not involve secondary cracking of the initially formed product. Secondary cracking often results in the formation of light ends, which are of low value to a unit operating to make liquid transportation fuels.
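The consequence of suppressing secondary cracking can be illustrated with a toy kinetic sketch (first-order reactions in series, with arbitrary and purely illustrative rate constants; this is not UOP's kinetic model): feed F forms primary distillate-range products P, which can in turn be cracked to light ends L.

```python
import math

def series_cracking(k1, k2, t):
    """Yields for the series scheme F -> P -> L (first-order reactions):
    F = unconverted feed, P = primary (distillate-range) products,
    L = light ends from secondary cracking. Feed basis = 1."""
    F = math.exp(-k1 * t)
    if math.isclose(k1, k2):
        P = k1 * t * math.exp(-k1 * t)
    else:
        P = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    L = 1.0 - F - P
    return F, P, L

t = 2.0  # dimensionless residence time
# "Ideal-like" cracking: secondary rate far below the primary rate
print(series_cracking(k1=1.0, k2=0.05, t=t))
# Strong secondary cracking: light ends grow at the distillate's expense
print(series_cracking(k1=1.0, k2=1.0, t=t))
```

With these illustrative constants, the ideal-like case retains about 81% of the feed as primary product at the same residence time, versus about 27% when secondary cracking is as fast as primary cracking.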

Control of this sequence of steps to stop the reactions after formation of primary products is accomplished by careful selection of catalyst properties such as the strength and distribution of acid sites and tailoring the hydrogenation function to fit the acidity on the catalyst. In addition, particularly when heavy feedstocks are being processed, elimination of diffusion constraints which contribute to secondary cracking is accomplished by strict control of pore size and pore geometry of the catalyst to match the molecular dimensions of a given feed. These catalyst properties must also be matched to the service environment in which the catalyst is intended to function, including the recycle gas composition and the reactor pressure. Thus, detailed knowledge of molecular types and size in the feed is incorporated into catalyst selection criteria in order to make critical determination of the appropriate catalytic components to match feed for a given unit.

Figure 3 - Second-Stage Unicracking Catalyst Design

Hydrocracking catalysts are typically dual-function catalysts, containing an acid function for cracking and a metal function for hydrogenation. As shown in Figure 3, a good hydrocracking catalyst, amorphous or zeolitic, is designed to balance these two functions for optimum performance. In the figure, two arrows indicate the type of function (acid and metal) and the height of each arrow indicates the strength of that function. A catalyst with the proper balance of these two functions performs optimally in terms of desired product selectivity and catalyst temperature activity/stability. However, if a catalyst designed for the sour reaction environment typical of first-stage operation is put in the cleaner reaction environment of the second stage, a significant boost in the cracking function is observed while the performance of the metal function remains basically unchanged. Thus, a catalyst that was in good balance for the first-stage environment becomes unbalanced for the second-stage environment, resulting in sub-optimal performance. This difference is exacerbated as the temperature required to achieve the desired conversion is reduced, since the lower temperature also weakens the metal function and hence hydrogenation. Therefore, for an ideal second-stage catalyst, it is desirable that the acidity of the cracking material be weak and the metal function stronger, so that even though the catalyst may appear unbalanced for the first-stage sour environment, it will be in balance in the second-stage reaction environment. Applying this design approach, UOP recently developed a new second-stage catalyst achieving higher distillate selectivity than the current UOP standard design.

Enhanced two-stage performance is achieved by optimized first- and second-stage conversion severity and application of the new second-stage catalyst. This results in significantly improved overall C5+ yields and a product slate which is more selective to a high quality heavy diesel product.

The enhanced two-stage design has improved distillate selectivity, and the product slate is diesel-selective with lower light-end production, resulting in 7-10% lower hydrogen consumption. The product qualities are similar or better. The improved performance is achieved by optimum processing severity and use of the new second-stage hydrocracking catalyst.

[1] Purvin & Gertz Inc., “Global Petroleum Market Outlook: Prices And Margins”, Fourth Quarter 2007 Update
[2] Thakkar V. P. et al, “Innovative Hydrocracking Applications For Conversion of Heavy Feedstocks”, AM-07-47 NPRA 2007 Annual Meeting
[3] Remsberg, Charles and Higdon, Hal, “Ideas for Rent The UOP Story”, p. 326, 1994
[4] Coonradt, H. L. and Garwood, W.E., Mechanism of Hydrocracking Reactions of Paraffins and Olefins, Ind. Eng. Chem. Process Des. Dev., 3 (1964) 38

Waste to Fuel Technologies

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)

1. Theme description

The growing concerns about climate change, as well as the management of ever-increasing liquid and solid wastes, have strongly driven R&D in waste-to-fuel conversion[1]. The transformation of wastes into fuels can be realized by the different processes represented in Fig. 1 (extending the classification of 2nd-generation biofuels). Direct incineration of waste enables the highest recovery of the energy content from the thermodynamic point of view. On the other hand, depending on the composition, the emissions of the combustion process can be characterized by the presence of pollutants such as HCl, HF, NOx, SO2, VOCs, PCDD/F, PCBs and heavy metals[2].

Fig. 1 -  Waste-to-Fuel Conversion technologies

Besides incineration, other thermochemical processes (see Fig. 1), such as pyrolysis, gasification and plasma-based technologies, have been developed for selected waste streams. In general, thermal treatment of biomasses (and wastes) yields a wide spectrum of fuels (gaseous, liquid and solid) and many chemicals as co-products; the specific treatment is chosen according to the final fuel and chemical products[3]. Many companies are using municipal solid waste (MSW) thermochemical conversion methods: Hitachi Metals Environmental Systems, Ebara/Alstom, Enerkem, Foster Wheeler, Nippon Steel, PKA, SVZ, etc. The first industrial-scale MSW-to-biofuel facility, opened in Edmonton in 2014 by Enerkem, converts 100,000 t/year of municipal waste into chemicals and biofuels and is able to divert 90% of the residential waste from landfills[4].

The multiple synthetic conversion routes of the major biofuels produced (Biofuel Flow) from first- and second-generation biomass feedstocks are represented in Fig. 2. Conversion through biochemical and physicochemical processes is playing an important role in recent biorefineries. These, following the paradigm of zero waste and zero emissions, allow the extraction of valuable substances by processing biomass into a spectrum of marketable products and energy, and are expected to play a fundamental role in the future low-carbon economy[5]. Moreover, biorefineries would be very attractive from an employment-creation perspective, resulting in significantly more jobs per unit of biomass feedstock than conventional processes[6]. A brief review of the processes and technologies cited in Fig. 1 is given in the following.



Fig.  2 - Biofuel flow[7]

2. Thermochemical conversion

Pyrolysis occurs without oxygen, at atmospheric pressure, in a temperature range of 250-900 °C. Generally, high vapour residence times favour char production (at lower process temperatures) and gas yield (at higher temperatures), whereas moderate-to-short vapour residence times favour liquid production. In fast pyrolysis, heating occurs at a moderate temperature (400-550 °C) with very high heating rates (~100 °C/s). A subsequent rapid quenching is required to condense the vapours, minimizing secondary reactions and coalescence or agglomeration (aerosol formation). The heat duty can be recovered from the combustion of part of the produced syngas. The liquefaction of solid wastes by pyrolysis has been widely reviewed in recent years, owing to the increasing interest in integrated technologies to derive fuels and chemicals from solid wastes[8]. A review of process conditions for optimum bio-oil yield in the hydrothermal liquefaction of biomass is given by Akhtar and Amin[9].

Municipal plastic wastes, through cracking and pyrolysis, can produce bio-oil of good quality, a valid option compared to plastic recycling or direct combustion[10]. For example, Sharma et al. (2014) report a study of the pyrolysis of high-density polyethylene grocery bags to produce alternative diesel fuels or blend components for petroleum diesel (saturated aliphatic paraffins) of very good quality (in terms of cetane number and lubricity). Many examples of pyrolysis plants are located in Japan. Mogami Kiko owns a pyrolysis plant (capacity of 200 kg/h) that processes several kinds of plastic in a rotary kiln, producing 80-100 Nm3/h of gas with an LHV of 5000-6000 kcal/Nm3, 30-40 kg/h of tar and 20-30 kg/h of char. Environment System has implemented the pyrolysis of thermoplastic waste (without chlorine) in a tank reactor with continuous feeding of scrap film (extruder). Toshiba implemented the continuous feeding of thermoplastic waste (no chlorine, 40 tons/day) into an externally heated rotary kiln, producing liquid and gaseous hydrocarbons and 4 MW of cogeneration. Syamsiro et al. described fuel oil production from MSW in sequential pyrolysis and catalytic reforming reactors[11]. Wong et al.[12] report alternative solutions for solid waste pyrolysis such as fluidized beds and supercritical water. Although microwave-assisted pyrolysis is another possible solution, especially in the treatment of commingled plastic waste, this relatively new concept requires more feasibility studies.
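Figures like the Mogami Kiko gas stream above can be translated into an equivalent thermal power with a back-of-envelope conversion (a minimal sketch using 1 kcal = 4.1868 kJ):

```python
KCAL_TO_KJ = 4.1868  # thermochemical-to-SI conversion

def gas_thermal_power_kw(flow_nm3_h, lhv_kcal_nm3):
    """Thermal power (kW) carried by a fuel-gas stream of given flow and LHV."""
    kj_per_h = flow_nm3_h * lhv_kcal_nm3 * KCAL_TO_KJ
    return kj_per_h / 3600.0  # kJ/h -> kW

low = gas_thermal_power_kw(80, 5000)    # lower end of the quoted range
high = gas_thermal_power_kw(100, 6000)  # upper end of the quoted range
print(f"{low:.0f} - {high:.0f} kW thermal")
```

The 80-100 Nm3/h stream at 5000-6000 kcal/Nm3 thus corresponds to roughly 0.47-0.70 MW of thermal power in the gas alone, before tar and char are counted.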

Gasification, operating at high temperatures (>700 °C) without combustion, yields solid and gaseous products. Although associated with lower power production and higher complexity, the gasification of solid wastes can count on about a hundred operating plants with capacities in the range of 10–250·10³ t/y, and represents a valid alternative in the field of waste management[13]. Moreover, gasification-based technologies reduce the amount of waste sent to disposal in comparison to conventional combustion-based WtE units and allow alternative strategies for syngas utilization[14]. Gasification of waste has therefore been exploited as an alternative to combustion in waste-to-energy (WtE) processes, in order to improve performance and support a distributed WtE policy.

The use of multiple high-temperature processes, including the breakdown of organics by plasma arcs, enables the production of a mixture of hydrogen and carbon monoxide. In this way, metals and other inorganic materials in garbage can be isolated and recycled; the combination of high temperatures and an oxygen-poor environment prevents the production of dioxins and furans; finally, the syngas can either be burned directly in gas turbines to produce electricity, or it can be converted into other fuels, including gasoline and ethanol. Enea reported several experimental campaigns conducted on lab- and pilot-scale devices[15]. Molino et al. investigated the steam gasification of scrap tires as a sustainable and cost-effective alternative to tire landfill disposal; steam activation of the char derived from the tire residues of the gasification process was carried out at constant temperature and feeding ratio between gasifying agent and char, using different activation times (180 and 300 min)[16].


3. Physicochemical conversion

These methods are based on the separation of useful chemical compounds by physicochemical extraction, such as cold-press extraction, supercritical fluid extraction and microwave extraction. In recent years, cavitation-assisted (e.g. ultrasound-assisted) extraction processes have been utilized for biomass pretreatment, delignification and hydrolysis, oil extraction, fermentation and bioalcohol synthesis[17]. Transesterification of plant or algal oil is a standardized process by which triglycerides are reacted with methanol in the presence of a catalyst to deliver fatty acid methyl esters (FAME) and glycerol. The extracted vegetable oils or animal fats are esters of saturated and unsaturated monocarboxylic acids with the trihydric alcohol glycerol (triglycerides), which can react with an alcohol in the presence of a catalyst, a process known as transesterification (according to the following simplified scheme of reactions).
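The ideal stoichiometry of transesterification (triglyceride + 3 methanol -> 3 FAME + glycerol) can be made concrete with approximate molar balances. As an illustrative assumption, triolein is taken as a representative triglyceride; real oils are mixtures, so these numbers are indicative only.

```python
# Approximate molar masses in g/mol; triolein is an illustrative stand-in
# for a real triglyceride mixture.
M_TRIOLEIN = 885.4
M_METHANOL = 32.04
M_METHYL_OLEATE = 296.5  # the FAME product of triolein
M_GLYCEROL = 92.09

def biodiesel_stoichiometry(oil_kg):
    """Ideal yields for: triglyceride + 3 MeOH -> 3 FAME + glycerol."""
    mol = oil_kg * 1000.0 / M_TRIOLEIN          # moles of triglyceride
    methanol_kg = 3 * mol * M_METHANOL / 1000.0
    fame_kg = 3 * mol * M_METHYL_OLEATE / 1000.0
    glycerol_kg = mol * M_GLYCEROL / 1000.0
    return methanol_kg, fame_kg, glycerol_kg

meoh, fame, gly = biodiesel_stoichiometry(1.0)  # per kg of oil
print(f"methanol {meoh:.3f} kg, FAME {fame:.3f} kg, glycerol {gly:.3f} kg")
```

Per kilogram of oil this gives roughly 0.11 kg of methanol consumed, about 1.00 kg of FAME and about 0.10 kg of glycerol; the mass balance closes to within the rounding of the molar masses. In practice methanol is fed in excess to push the equilibrium.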


The simplified process scheme is given in Fig. 3. From an economic point of view, the production of biodiesel has proven to be very feedstock-sensitive. Leung et al. report a review on biodiesel production using catalysed transesterification[18]. Waste vegetable oil (WVO) can also be converted after refinement; it has a low sulphur content and is not associated with land-use change. The utilization of waste cooking oils is explained in detail in the review by Kulkarni et al.[19]


Fig. 3 - Simplified process flow chart of alkali-catalyzed biodiesel production.


4. Biochemical conversion

In general, the conversion of biodegradable waste or energy crops through anaerobic digestion produces a gaseous fuel called biogas (mainly methane and carbon dioxide). Similarly, the waste in landfills generates gases (landfill gases, LFG) that can represent a source of renewable energy. Some examples of commercial conversion processes (typically run via anaerobic digestion or fermentation by anaerobes) are reported in the table below.



Microbial hydrogen production using anaerobic fermentative bacteria is considered a cost-effective technology because the process can use waste materials or wastewaters. The biological production pathways of hydrogen and methane (by microorganisms) can be divided into two main categories: by photosynthetic bacteria under anaerobic or semi-anaerobic light conditions, and by chemotrophic anaerobic bacteria[20]. During the process, organic matter is converted to volatile fatty acids through hydrolysis and acidogenesis (acidogenic fermentation or dark fermentation); the latter produces fuel gas at higher rates. Hydrogen yields from various crop substrates are reported by Mei Guo et al.[21] Kurniawan et al. reported a study on acid fermentation combined with post-denitrification for the treatment of primary sludge[22].

Since 1980, the US Department of Energy has supported the Aquatic Species Program (ASP) to exploit algae as fuels (mainly oil from microalgae). The ASP first worked on growing algae in open ponds and on studying the impacts of different nutrient and CO2 concentrations. The program ended in 1995 due to financial issues. In recent years, energy security risks and advancements in biotechnology (the ability to genetically engineer algae to produce more oil and convert solar energy more efficiently) have revived R&D in this field[23]. Despite the issue of low oil productivity per acre, the cultivation of oleaginous microorganisms (microalgae) can contribute to biofuel production and to the mitigation of carbon emissions. In this field, further improvements are also needed in the downstream processes and light-supply systems.

[1] Piemonte, V., Capocelli, M., Orticello, G., Di Paola, L., 2016. Bio-oil production and upgrading, in: Membrane Technologies for Biorefining, pp. 263-287.
[2] Bosmans, A., et al., The crucial role of Waste-to-Energy technologies in enhanced landfill mining: a technology review. Journal of Cleaner Production (2012), doi:10.1016/j.jclepro.2012.05.032.
Barba D, Capocelli M, Luberti M, Zizza A. Process analysis of an industrial waste-to-energy plant: theory and experiments. Process Saf Environ 2015;96:61–73.
[3] Mckendry, P., 2002. Energy Production from Biomass (Part 2): Conversion Technologies. Bioresource Technology 83, 47-54.
[5] Industrial Biorefineries and White Biotechnology. Elsevier B.V., 2015.
[6] Patricia Thornley, Katie Chong, Tony Bridgwater. European biorefineries: Implications for land, trade and employment. Environmental Science & Policy 37 (2014) 255–265.
[7] D.King et al., The future of industrial Biorefineries. 2010 World Economic Forum.
[8] Isahak, Wan Nor Roslam Wan, Mohamed W M Hisham, Mohd Ambar Yarmo, and Taufiq Yap Yun Hin. 2012. “A Review on Bio-Oil Production from Biomass by Using Pyrolysis Method.” Renewable and Sustainable Energy Reviews 16 (8). Elsevier: 5910–23. doi:10.1016/j.rser.2012.05.039.
Bridgwater, A.V. 2012. “Review of Fast Pyrolysis of Biomass and Product Upgrading.” Biomass and Bioenergy 38: 68–94. doi:10.1016/j.biombioe.2011.01.048.
[9] Javaid Akhtar, Nor Aishah Saidina Amin. A review on process conditions for optimum bio-oil yield in hydrothermal liquefaction of biomass. Renewable and Sustainable Energy Reviews15 (2011) 1615–1624
[10] Demirbas, Ayhan. 2004. “Pyrolysis of Municipal Plastic Wastes for Recovery of Gasoline-Range Hydrocarbons.” Journal of Analytical and Applied Pyrolysis 72 (1): 97–102. doi:10.1016/j.jaap.2004.03.001.
[11] Mochamad Syamsiro et al. / Energy Procedia 47 ( 2014 ) 180 – 188
[12] S.L. Wong et al. / Renewable and Sustainable Energy Reviews 50 (2015) 1167–1180
[13] Diego Barba, Mauro Capocelli,  Giacinto Cornacchia, Domenico A. Matera. Theoretical and experimental procedure for scaling-up RDF gasifiers: The Gibbs Gradient Method. Fuel 179 (2016),60–70.
[14] Arena U, Di Gregorio F. Element partitioning in combustion- and gasificationbased waste-to-energy units. Waste Manage 2013;33:1142–50.  Arena U, Ardolino F, Di Gregorio A. Life cycle assessment of environmental performances of two combustion- and gasification-based waste-to-energy technologies. Waste Manage 2015;41:60–74.
[15] Galvagno S, Casu S, Casciaro G, Martino M, Russo A, Portofino S. Steam gasification of refuse-derived fuel (RDF): influence of process temperature on yield and product composition. Energy Fuels 2006;20:2284–8.  Portofino S, Donatelli A, Iovane P, Innella C, Civita R, Martino M, et al. Steam gasification of waste tyre: influence of process temperature on yield and product composition. Waste Manage 2013;33:672–8.  Galvagno S, Casciaro G, Casu S, Martino M, Mingazzini C, Russo A, et al. Steam gasification of tyre waste, poplar, and refuse-derived fuel: a comparative analysis. Waste Manage 2009;29:678–89
[16] Molino et al.,  Ind. Eng. Chem. Res. 2013, 52, 12154−12160
[17] A. Ranjan, S. Singh, R. S. Malani and V. S. Moholkar, Ultrasound-Assisted Bioalcohol Synthesis: Review and Analysis. RSC Adv., 2016, DOI: 10.1039/C6RA11580B.
[18] D.Y.C. Leung et al. / Applied Energy 87 (2010) 1083–1095
[19] Mangesh G. Kulkarni and Ajay K. Dalai. Waste Cooking Oils - An Economical Source for Biodiesel: A Review. Ind. Eng. Chem. Res., Vol. 45, No. 9, 2006.
[20] Cheong et al., Production of Bio-Hydrogen by Mesophilic Anaerobic Fermentation in an Acid-Phase Sequencing Batch Reactor. Biotechnology and Bioengineering, 96, 2007
[21] Mei Guo et al., Hydrogen production from agricultural waste by dark fermentation: A review. International Journal of Hydrogen Energy (2010) 1-14.
[22] Kurniawan et al., Acid Fermentation Process Combined with Post Denitrification for the Treatment of Primary Sludge and Wastewater with High Strength Nitrate. Water 2016, 8, 117; doi:10.3390/w8040117.

Waste Heat Recovery in the Oil & Gas Sector

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)


Waste heat recovery is a process that involves capturing heat exhausted by an existing industrial process for other heating applications, including power generation. Technavio forecast the global waste heat recovery market in the oil and gas industry to grow at a CAGR of 7.6% during the period 2014-2019[1]. The sources of waste heat mainly include the discharge of hot combustion and process gases into the atmosphere (e.g. melting furnaces, cement kilns, incinerators), cooling water, and conductive, convective and radiative losses from equipment and heated products. To design a waste heat reclamation unit, it is necessary to characterize the stream in terms of availability, temperature, pressure and presence of contaminants such as particulates and corrosive gases. There are two main goals in recovering industrial waste heat: thermal energy recovery (both inside and outside the plant) and electrical power generation. Fath & Hashem compared these two solutions for the recovery of waste heat in an oil refinery plant located at Baghdad, Iraq[2]. For overall energy system efficiency, it is nowadays fundamental to improve the utilization of low-temperature heat streams, primarily for thermal applications like heating, ventilation, cooling, greenhouses, etc. Oda & Hashem investigated in 1990 the selection of different strategies (air conditioning, food industry and agricultural uses) for an industrial area including a refinery[3]. Nonetheless, even for low-temperature sources, some innovations have been proposed to produce electricity for standalone plants and/or to exploit resources that cannot be properly used for direct thermal applications. In the following, all these aspects are addressed and some of the most recent and interesting R&D developments are reported.

Fig.1 - Estimated U.S. Energy use in 2012.

2. Thermal energy

Traditionally, waste heat in the low temperature range (0-120 °C) cannot be profitably implemented for electricity generation because of the low Carnot efficiency (typically ending up with 5-7% net electricity). In the field of direct thermal utilization, two main options are available: waste heat recycling within the process (Fig. 2) or recovery elsewhere within the plant or industrial complex.
Fig 2 - Rotary regenerator on a Melting Furnace

The main utilizations in industrial systems are the preheating of combustion air and load, and steam generation. Transfer to liquid or gaseous process streams is also common in petroleum refineries, where operations (distillation, thermal cracking…) require large amounts of energy that can be recovered from exothermic reactions or hot process streams in integrated systems.

Doheim et al.[4] described the integration of rotating regenerative heat exchangers in four refining processes (two crude distillation units, a vacuum distillation unit, and a platforming unit) in order to reduce the current losses (25 to 62% of total heat input) to values of 9.9 to 37.3%. At low temperature (<200 °C), the best uses are the regenerative (recuperative) heating of feedstocks (process-internal reuse), district heating and LP steam generation. District heating (or teleheating) is a system for distributing heat generated in a centralized location for residential and commercial requirements via a network of insulated pipes (mainly pressurized hot water and steam). Alternatively, low-temperature waste heat can be used for the production of biofuel, space heating, greenhouses and eco-industrial parks. In industrial complexes requiring large amounts of freshwater and located near the sea, a viable alternative is to desalinate seawater via thermal processes such as Multiple Effect Distillation and Multi-Stage Flash desalination, in order to obtain demineralized, potable or process water.

The generation of electricity from thermal energy should be taken into account if there are no viable options for in-house utilization of additional process heat or for meeting neighbouring plants' demand. The most common system involves steam generation in a waste heat boiler linked to a steam turbine in a Rankine Cycle (RC). Industrial examples can easily be found in the literature. The Steam Energy WHP plant at a petroleum coke facility located at Port Arthur (Texas) recovers energy from three petroleum-coke calcining kilns at temperatures higher than 500 °C, producing LP steam (used at an adjacent refinery) and 5 MW of power (saving an estimated 159,000 tons per year of CO2 emissions).

Since the thermal efficiency of conventional steam power generation becomes considerably low and uneconomical when the steam temperature drops below 370 °C, the Organic Rankine Cycle (ORC) utilizes a suitable organic fluid, characterized by a higher molecular mass, a lower heat of vaporization and a lower critical temperature than water[5] (silicone oil, propane, haloalkanes, isopentane, isobutane, p-xylene, toluene, etc.).


Fig. 3 - T-s diagram of a cyclopentane ORC cycle

These enable the utilization of lower temperatures (compared to the RC) and a “better” coupling (lower entropy generation) with the heat-source fluid to be cooled[6]. The higher molecular mass enables compact designs, higher mass flows and higher turbine efficiencies (as high as 80-85%). However, since the cycle works at lower temperatures, the overall efficiency is only around 10-20%. As mentioned above, it is important to remember that low-temperature cycles are inherently less efficient than high-temperature cycles. Jung et al. (2014) reported a techno-economic evaluation of an ORC cycle (with pure refrigerants and mixtures of R123, R134a, R245fa, isobutane, butane and pentane) to recover heat from liquid kerosene being cooled down to control the vacuum distillation temperature[7]. An example of a recent successful ORC installation is at a cement plant in Bavaria (Germany), recovering waste heat from its clinker cooler (exhaust gas at 500 °C), providing 12% of the plant's electricity requirements and reducing CO2 emissions by approximately 7000 tons/year. Several R&D projects[8] and commercial plants[9] are reported in the references (footnotes). An example of the T-s diagram of an ORC with cyclopentane (MW 70, boiling point 49.5 °C) developed by GE[10] is shown in Figure 3. ElectraTherm also applies a proprietary ORC to generate power from low-temperature heat by utilizing, as fuel in industrial boilers, natural gas that would otherwise be flared[11].

The Kalina cycle (KC) utilizes a mixture of ammonia and water as the working fluid (with a variable temperature during evaporation). It was invented in the 1980s; the first power plant (6.5 MW, 115 bara, 515 °C) was constructed in California (1992) and followed by many plants in Japan, Pakistan and Dubai[12]. The KC allows a better thermal matching with the waste heat source and with the cooling medium in the condenser, achieving higher energy efficiency. Although Kalina systems have the highest theoretical efficiencies, their complexity still makes them generally suitable only for large power systems of several megawatts or greater.


Fig. 4 - H-T energy recovery in the Kalina Cycle

In addition to these cycles, some advanced technologies at the research and development stage can generate electricity directly from heat. These include the Stirling engine[13] and thermoelectric, piezoelectric, thermionic and thermo-photovoltaic (thermo-PV) devices. Although they could in the future provide additional options for carbon-free power generation, they currently show very low efficiencies. Keeping in mind that a Carnot engine operating with a heat source at 150 °C and rejecting heat at 25 °C is only about 30% efficient, all these systems show global efficiencies in the range of 1-10%. As an example, in piezoelectric power generation (PEPG), a thin-film membrane is used to create electricity from the mechanical vibrations of a gas expansion/compression cycle fed by waste heat (150-200 °C). Thermoelectric generation (TEG) exploits the temperature change across a semiconductor, which induces a voltage through a phenomenon known as the Seebeck effect[14]. Öström and Karthäuser recently claimed a method for the conversion of low-temperature heat to electricity and cooling, comprising CO2 absorption and an expansion machine[15].
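The ~30% Carnot figure quoted above is straightforward to verify (a minimal sketch; real devices recover only a fraction of this thermodynamic limit):

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum (Carnot) efficiency between source and sink in Celsius."""
    t_hot = t_hot_c + 273.15   # convert to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# The example from the text: 150 C source, 25 C sink -> about 29.5%
print(f"{carnot_efficiency(150, 25):.1%}")
# Limits at other source temperatures mentioned in this article
for t in (80, 120, 150, 370):
    print(f"{t:>3} C source -> {carnot_efficiency(t, 25):.1%}")
```

The limit falls to about 24% at 120 °C and 16% at 80 °C, which is why low-temperature recovery typically nets only a few percent once component losses are included.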

Finally, recent R&D efforts in the use of saline solutions at different concentrations have enabled heat conversion into electricity in the lowest temperature range of application. This is possible by making use of heat engines based on Salinity Gradient Energy (SGE) (or Salinity Gradient Power, SGP) technologies.

Salinity gradient energy is a novel non-conventional renewable energy related to the mixing of solutions with different salinity levels, as occurs in nature when a river discharges into the sea. Clearly, when this mixing process occurs spontaneously, the associated energy is completely dissipated. Conversely, this energy can be harvested by adopting a suitable device devoted to performing a “controlled mixing” of the two streams at different salinity (e.g. river water and seawater). Depending on the device type, different technologies have been proposed so far: the Chemical Engineering Research group of the University of Palermo, involved in this field of R&D activities[16], recently edited a book[17] where Pressure Retarded Osmosis (PRO), Reverse Electrodialysis (RED) and Accumulator Mixing (AccMix) are indicated as the most promising technologies.

When employed within a closed loop, each SGP technology can be used to convert waste heat into electricity. This concept is named the Salinity Gradient Power Heat Engine (SGPHE) (Figure 5) and consists of two main units:

  1. the SGP unit devoted to mixing two solutions at different salt concentration in order to convert the Gibbs free energy of the relevant salinity gradient into valuable power;
  2. a regeneration unit which employs low-value waste heat at very low temperature levels (i.e. 50-100°C) to separate the two streams again, thus restoring the initial salinity gradient and closing the cycle.
Fig. 5 - Scheme of a SGP Heat Engine
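The Gibbs free energy that the SGP unit can convert into power may be estimated with the ideal (van 't Hoff) approximation for the mixing of two NaCl solutions. The sketch below is illustrative only: the seawater/river-water concentrations and volumes are assumptions chosen to show the order of magnitude, not values from the text.

```python
import math

R = 8.314    # J/(mol K), universal gas constant
T = 298.15   # K, ambient temperature

def mixing_energy(c_conc, v_conc, c_dil, v_dil):
    """Ideal Gibbs free energy (J) released by mixing two NaCl solutions.
    Concentrations in mol/L, volumes in L; the factor 2 accounts for the
    two ions (Na+ and Cl-) released per formula unit of NaCl."""
    c_mix = (c_conc * v_conc + c_dil * v_dil) / (v_conc + v_dil)
    return 2 * R * T * (v_conc * c_conc * math.log(c_conc / c_mix)
                        + v_dil * c_dil * math.log(c_dil / c_mix))

# Illustrative case: 1 m3 of seawater (~0.5 M) mixed with 1 m3 of river water (~0.01 M)
dg = mixing_energy(0.5, 1000.0, 0.01, 1000.0)
print(f"~{dg/1e6:.2f} MJ (~{dg/3.6e6:.2f} kWh) released per 2 m3 mixed")
```

The result, roughly 1.5 MJ per cubic meter pair, shows why a closed loop with engineered solutes (rather than natural waters) is attractive for maximizing power density.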

The adoption of the closed loop opens up a large variety of advantages and possibilities with respect to open-loop SGP technologies. As an example, the closed loop removes the need for natural/artificial basins of solutions at different salt concentration in the same area. More importantly, no pre-treatments are necessary and any kind of solute or solvent can be employed with the aim of maximizing the power production and the cycle efficiency. In this regard, according to recent estimates, it appears that the SGPHE (i) can be operated at very low temperatures where no alternative technologies exist and (ii) can potentially achieve exergetic efficiencies higher than any other technology[18].

[1] Global Waste Heat Recovery Market in oil and Gas Industry 2015-2019 by Infiniti Research Limited (2015).
[2] H.E.S. Fath, H.H. Hashem. Waste heat recovery of dura (Iraq) oil refinery and alternative cogeneration energy plant. Heat Recovery Systems and CHP 8, Issue 3, 1988, 265-270.
[3] Oda & Hashem, 1990. Proposals for utilizing the waste heat from an oil refinery. Heat Recovery Systems and CHP, 10, Issue 1, 1990, 71-77.
[4] M.A. Doheim, S.A. Sayed, O.A. Hamed. Energy analysis and waste heat recovery in a refinery. Energy, Volume 11, Issue 7, July 1986, 691-696.
[5] B. Saadatfar, R. Fakhrai, T. Fransson. JMES, Vol. 1, Issue 1, 2013.
[6] J. Larjola. Int. J. Production Economics 41 (1995) 227-235.
[7] H.C. Jung, S. Krumdieck, T. Vranjes. Feasibility assessment of refinery waste heat-to-power conversion using an organic Rankine cycle. Energy Conversion and Management, Volume 77, January 2014, 396-407.
[10] A. Burrato. Development and Applications of ORegen Waste Heat Recovery Cycle. © 2015 General Electric Company. All rights reserved.
[13] A.V. Mehta, R.K. Gohil, J.P. Bavarv, B.J. Saradava. Waste heat recovery using Stirling Engine. IJAET, Vol. III, Issue I, 2012, 305-310.
[15] US 20130038055 A1. Method for conversion of low temperature heat to electricity and cooling, and system.
[16] L. Gurreri et al., 2014. CFD prediction of concentration polarization phenomena in spacer-filled channels for reverse electrodialysis. Journal of Membrane Science, 2014, vol. 468, 133-148.
M. Tedesco et al., 2015. A simulation tool for analysis and design of reverse electrodialysis using concentrated brines. Chem. Eng. Res. Des. 93, 441-456.
M. Tedesco et al., 2016. Performance of the first Reverse Electrodialysis pilot plant for power production from saline waters and concentrated brines. Journal of Membrane Science, 2016, 500, 33-45.
M. Bevacqua et al., 2016. Performance of a RED system with Ammonium Hydrogen Carbonate solutions. Desalination and Water Treatment, in press. doi: 10.1080/19443994.2015.1126410.
[17] A. Cipollina, G. Micale, Sustainable Energy from Salinity Gradients, 1st ed., Woodhead Publishing – Elsevier, 2016, ISBN 978-0-08-100312-1.
[18] A. Tamburini, A. Cipollina, M. Papapetrou, A. Piacentino, G. Micale, Salinity gradient engines, Chapter 7 in: Sustainable Energy from Salinity Gradients, 1st ed., Woodhead Publishing – Elsevier, 2016, ISBN 978-0-08-100312-1.

In-Situ Remediation of Soil, Sediments, and Groundwater Contaminated by Hazardous Substances

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)

1.Theme description


Highly polluted sites are present all over the world, particularly in countries that have seen uncontrolled and unplanned economic development in recent years. They are the result of earlier industrialization and poor environmental management practices that caused the alteration of groundwater and surface water, impaired air quality, hampered soil functions and polluted the environment in general. In Europe there are about 500,000 contaminated sites and two million potentially contaminated sites, often resulting from retired industrial, extractive and military activities[1]. The U.S. Department of Energy (DOE) manages an inventory of sites including 6.5 trillion liters of contaminated groundwater (equal to about four times the daily U.S. water consumption) and 40 million cubic meters of soil and debris contaminated with radionuclides, metals, and organics. Some of the main contamination sources in this field are depicted in Figure 1[2].

Remediation represents the set of solutions, such as the treatment, the containment or the removal/degradation of chemical substances or wastes, so that they no longer represent an actual or potential risk to human health or the environment, taking into account the current and intended use of the site[3]. As described by EPA, any remediation management plan considers complex systems involving different pollutants and polluted matrices, and should include all the impacted environmental aspects such as air quality, noise, surface water, soil quality, groundwater management, flora and fauna, and heritage, as well as social, structural and safety aspects. The dispersion of the Non-Aqueous Phase Liquid (NAPL) in Figure 1 depends on the site geotechnical characteristics, the relative position of the aquifer and the chemical properties of the pollutant. Sometimes the contamination source succeeds in reaching the groundwater, as at solid waste landfills where chlorinated organic compounds reach the groundwater through rainfall leaching.


Figure 1 - Site contamination sources and mechanisms of dispersion

2.Techniques and technologies

Typical pollutants in this sector are aromatic hydrocarbons, heavy metals and pesticides, as well as biological contaminants. The choice of a contaminated soil remediation technology is based on economic factors, the site-specific characteristics and the remediation goal. Remediation technologies can be realized both on-site and off-site and act mainly by transformation (degradation of complex organic compounds to simpler intermediates, possibly up to full mineralization) or by removal from the contaminated matrix, typically for heavy metals, which are already in elemental form and cannot be further degraded. When these techniques cannot be accomplished or are too risky and expensive, immobilization with ordinary Portland cement (OPC), water glass (sodium silicate), gypsum or organic polymers (for example acrylic or epoxy resins), or covering with bentonite or a polymeric membrane, are the available options to isolate the polluted site, reducing water infiltration and the possible mobilization and migration of the elements. In this brief review it is complicated to clearly distinguish the methods according to the contaminated matrix (the phenomenon often being multi-matrix and multi-pollutant) or to the possibility of realizing them close to the site or far away in centralized systems. Therefore, the treatments are presented in relation to the technological nature of the process (physical-chemical, thermal and biological), as listed in Table 1.


 Table 1 - Remediation Processes

Due to the recalcitrant nature or the toxicity of the main pollutants, which are incompatible with biological systems, it is necessary to implement chemical methods to neutralize these substances, i.e. to convert them into less harmful, less mobile, more stable and inert forms. Injection of chemical reductants, including calcium polysulphide, has been used to promote contaminant reduction and precipitation within aquifers. In-situ oxidation consists of injecting oxidants such as hydrogen peroxide (H2O2) into the contaminated aquifer.


Figure 2 - In Situ Oxidation of polluted groundwater

Contaminants that are well suited to remediation using this approach include metals with a lower solubility under reduced conditions (e.g. Cr(VI), through reduction to Cr(III) and precipitation of Cr(III) hydroxides). Advanced oxidation processes releasing hydroxyl radicals are the most affordable techniques to degrade organic recalcitrant pollutants. These include the use of H2O2, UV, O3, "Fenton reactants", etc.
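Hydroxyl-radical oxidation of an organic pollutant is commonly modelled as a pseudo-first-order decay when the radical concentration is roughly constant. The sketch below uses illustrative rate constants (assumed typical orders of magnitude, not values from the text) to estimate the treatment time:

```python
import math

# Pseudo-first-order decay: C(t) = C0 * exp(-k_OH * [OH]ss * t)
k_oh = 1.0e9      # L/(mol s), typical order of magnitude for OH radical + organics (assumed)
oh_ss = 1.0e-12   # mol/L, assumed steady-state hydroxyl radical concentration
c0 = 1.0          # mg/L, initial pollutant concentration (assumed)

def concentration(t_s: float) -> float:
    """Residual pollutant concentration (mg/L) after t_s seconds of oxidation."""
    return c0 * math.exp(-k_oh * oh_ss * t_s)

# Time for 90% removal: ln(10) / (k_OH * [OH]ss)
t90 = math.log(10) / (k_oh * oh_ss)
print(f"90% removal after ~{t90/60:.0f} min")
```

The effective rate constant (here 1e-3 1/s) is the product of the intrinsic bimolecular constant and the steady-state radical level, which is why process performance depends so strongly on how efficiently the oxidant dose is converted into radicals.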


Figure 3 - OH attack to the aromatic ring.

 The physical treatments mainly consist in the separation of the pollutant. Alternatively, it is possible to isolate a highly concentrated matrix to be subsequently treated or sent to final disposal. This solution avoids the addition of chemical reagents (and secondary pollutant formation) but entails costs for gas treating and for landfilling, especially for special waste. Air sparging is successfully applicable to volatile compounds (hydrocarbons and chlorinated solvents). The physical and geotechnical characteristics of the soil, as well as the chemical properties of the pollutant, are fundamental in the process analysis. The characteristics of the aquifer, if present, also influence the process. Natural zeolite has been studied extensively for remediation of heavy metal-polluted soils due to its wide availability and low cost.

Pump-and-treat involves removing contaminated groundwater from strategically placed wells, treating the extracted water at the surface to remove the contaminants using mechanical, chemical, or biological methods, and discharging the treated water to the subsurface, surface, or municipal sewer system. Water from the aquifer is pumped through the wells and piped to the pump-and-treat facilities, where contaminants are removed through ion exchange, which relies on tiny resin beads, resembling cornmeal, packed into large tanks or columns. As the water travels through the columns, hexavalent chromium ions cling to the resin beads and are removed from the water[4].
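A rough sizing of the ion-exchange step described above can be made from the resin capacity and the feed concentration. All numbers below are illustrative assumptions (not data from the text), and the chromate equivalent weight is a simplifying assumption:

```python
# Rough service estimate for an ion-exchange column of the kind described above.
# All numbers are illustrative assumptions, not data from the text.

resin_volume_l = 1000.0     # L of resin in the column (assumed)
resin_capacity_eq_l = 1.0   # eq of exchangeable ions per L of resin (typical strong-base resin)
cr6_feed_mg_l = 0.5         # mg/L hexavalent chromium in the extracted groundwater (assumed)
cr_eq_weight = 52.0 / 2.0   # g Cr per eq, assuming divalent chromate (CrO4 2-)

feed_eq_l = cr6_feed_mg_l / 1000.0 / cr_eq_weight   # eq of Cr per L of water
bed_volumes = resin_capacity_eq_l / feed_eq_l       # L of water treated per L of resin
total_m3 = bed_volumes * resin_volume_l / 1000.0
print(f"~{bed_volumes:,.0f} bed volumes, i.e. ~{total_m3:,.0f} m3 treatable before regeneration")
```

The very large number of bed volumes reflects the trace-level feed concentration; in practice, competing anions (sulfate, nitrate) consume much of the capacity and shorten the service cycle considerably.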

Depending on the type of reactive material and contaminants, the degradation may be complete or may produce intermediates with different toxicity from the initial compounds. Therefore, combined chemical-physical techniques (e.g. soil washing) are very often used to exploit the advantages of both.

While pump-and-treat of groundwater mainly includes ex-situ treatments, Permeable Reactive Barriers (PRBs) can be used for the in-situ treatment of contaminated groundwater. As visible in Figure 4, a PRB in its usual configuration consists of a continuous treatment zone formed by the reactive material, installed in the subsoil in order to intercept the contaminated plume and induce the degradation of the contaminants from the mobile liquid phase. This technology is energy-saving since a reactive medium with a permeability higher than that of the surrounding soil is used[5],[6]: remediation occurs under the natural gradient of the aquifer, without any energy contribution other than the groundwater hydraulic head. PRBs are called Permeable Adsorptive Barriers (PABs) when an adsorbing material is used as the reactive one and contaminant removal is carried out by adsorption[6]. Recently, academic research has been focusing on innovative configurations, such as Discontinuous Permeable Adsorptive Barriers[7], arranged as a passive well array with one or more lines at a fixed distance from one another and filled with adsorbing materials (Figure 5). Comparing continuous and discontinuous adsorptive barrier configurations, it can be found that the same volume of groundwater can be decontaminated with a smaller barrier volume, and consequently at lower remediation cost, if a discontinuous barrier is used, highlighting the technological and cost-saving innovation of this advanced configuration[8].
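Since a PRB works under the natural hydraulic gradient alone, a key design check is whether the plume spends enough time inside the reactive zone. The sketch below applies Darcy's law with assumed aquifer and barrier parameters (all values illustrative):

```python
# Residence time of groundwater in a permeable reactive barrier (illustrative values).
K = 1.0e-4   # m/s, hydraulic conductivity of the reactive medium (assumed)
i = 0.005    # hydraulic gradient of the aquifer (assumed)
n = 0.35     # effective porosity of the reactive medium (assumed)
L = 1.0      # m, barrier thickness in the flow direction (assumed)

darcy_velocity = K * i                 # m/s, specific discharge through the barrier
seepage_velocity = darcy_velocity / n  # m/s, actual velocity in the pores
residence_time_h = L / seepage_velocity / 3600.0
print(f"Residence time in the barrier: ~{residence_time_h:.0f} h")
```

If this residence time is shorter than the characteristic reaction (or adsorption) time of the target contaminant, the barrier must be made thicker or filled with a more reactive medium.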


Figure 4 - Schematic of a Continuous PRB


Figure 5 - Schematic of a Discontinuous PAB[7]

The biological remediation methods (biosparging, landfarming) are available for highly permeable and homogeneous soils for the mineralization or conversion of organic contaminants (SVE, BV, BTEX, light hydrocarbons, non-chlorinated phenols) into less toxic forms, or more toxic but less bioavailable ones. These processes primarily exploit the ability of microorganisms to transform the polluting material partly into biomass and partly into less complex molecules (eventually to minerals, carbon dioxide and water). These processes have also been tried for removing heavy metals from soil, using biological leaching (bioleaching) or redox reactions. These methods are also non-invasive and can bring potential beneficial effects on the structure and fertility of the soil. In addition to microorganisms, plants can accumulate and degrade contaminants in the so-called phytoremediation process, which takes advantage of the complex interaction between the root system of plants, microorganisms and soil, and represents the most sustainable solution in this sector. A review is given by Pulford and Watson[9]. A typical plant may accumulate about 100 parts per million (ppm) zinc and 1 ppm cadmium. Thlaspi caerulescens (alpine pennycress, a small, weedy member of the broccoli and cabbage family) can accumulate up to 30,000 ppm zinc and 1,500 ppm cadmium in its shoots, while exhibiting few or no toxicity symptoms; a normal plant can be poisoned by as little as 1,000 ppm of zinc or 20 to 50 ppm of cadmium in its shoots[10]. Phytoremediation has also been studied for degrading PCBs and PCDD/Fs[11]. Some disposal methods for phytoremediation crops were proposed by Sas-Nowosielska et al.[12]; the most beneficial is to use phytoextraction crops for energy production via pyrolysis, gasification or combustion. The fate of trace elements during combustion, pyrolysis, fluidized bed and downdraft gasification has been studied in the recent scientific literature[13].
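How long phytoextraction takes can be estimated from a simple mass balance. The 30,000 ppm (3%) zinc shoot content comes from the text; the soil burden, plough-layer mass and crop yield below are illustrative assumptions:

```python
# Time scale of phytoextraction with a zinc hyperaccumulator.
# The 3% (30,000 ppm) shoot content is from the text; the rest are assumptions.

soil_zn_mg_kg = 500.0        # mg Zn per kg soil (assumed moderate contamination)
plough_layer_t_ha = 2250.0   # t of soil per hectare in the top ~15 cm (assumed)
crop_yield_t_ha_yr = 5.0     # t dry shoot biomass per hectare per year (assumed)
shoot_zn_fraction = 0.03     # 30,000 ppm Zn in shoots (Thlaspi caerulescens, from text)

zn_in_soil_kg_ha = soil_zn_mg_kg * plough_layer_t_ha / 1000.0    # kg Zn per hectare
zn_removed_kg_ha_yr = crop_yield_t_ha_yr * 1000.0 * shoot_zn_fraction
years = zn_in_soil_kg_ha / zn_removed_kg_ha_yr
print(f"Zinc burden: {zn_in_soil_kg_ha:.0f} kg/ha; ~{years:.1f} years to extract")
```

Even with a hyperaccumulator, removal takes years rather than months, which is why combining phytoextraction with energy recovery from the harvested biomass, as discussed above, is attractive.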

Thermal methods can induce the separation of the pollutant by means of desorption/volatilization, and its destruction or immobilization by fusion of the solid matrix. Regarding the desorption of pollutants from contaminated soil, a major research effort has been initiated to characterize the rate-controlling processes associated with the evolution of hazardous materials from soils[14]. The P.O.N. Research Project DI.MO.D.I.[15] was focused on the treatment of soils contaminated by hydrocarbons with an innovative device that could solve many of the logistical problems that make "on-site" treatment difficult. The device developed (sketched in Figure 6) consists in a mobile unit, installed on a truck and completely self-sufficient, able to carry out emergency safety and remediation actions in reasonably short times and at low cost. The treatment unit uses a dual fluidized bed reactor technology fed by the hot gas produced by a hot gas generator. The upper bed is aimed at soil drying while the lower bed is aimed at soil remediation by thermal desorption. Soil drying and desorption of volatile and semi-volatile organic contaminants occur by direct air/solid particle contact promoted by the fluidization technology. The soil requires a pre-treatment based on shredding/pulverizing and size separation, in order to feed the soil with the optimal size for fluidization. Particle removal from the desorption gaseous stream is carried out by dust separator units (fabric filter and cyclone).


Figure 6 - Schematization of the DI.MO.D.I. treatment unit


Figure 7 - DI.MO.D.I. treatment unit
W.W. Kovalick Jr., R.H. Montgomery. Developing a Program for Contaminated Site Management in Low and Middle Income Countries. The World Bank.
Ahmad I., Hayat S. and Pichtel J. (2005). Heavy Metal Contamination of Soil: Problems and Remedies. Science Publishers, Inc., Enfield, NH, USA.
Van Lynden, G.W.J. (1995). European soil resources. Current status of soil degradation, causes, impacts and need for action. Council of Europe Press. Nature and Environment, No 71, Strasbourg, France.
[3] EPA Guidelines for Environmental Management of On-site Remediation.
[5] U.S. EPA, 1999. Field Applications of In Situ Remediation Technologies: Permeable Reactive Barriers, EPA, 542-R-99-002.
[6] Erto, A., Lancia, A., Bortone, I., Di Nardo, A., Di Natale, M., Musmarra, D., 2011. A procedure to design a Permeable Adsorptive Barrier (PAB) for contaminated groundwater remediation. Journal of Environmental Management, 92, 23-30.
[7] Bortone, I., Di Nardo, A., Di Natale, M., Erto, A., Musmarra, D., Santonastaso, G.F., 2013. Remediation of an aquifer polluted with dissolved tetrachloroethylene by an array of wells filled with activated carbon. Journal of Hazardous Materials, 260, 914–920.
[8] Santonastaso, G. F., Bortone, I., Chianese, S., Erto, A., Di Nardo, A., Di Natale, M., Musmarra, D., 2015. Application of a discontinuous permeable adsorptive barrier for aquifer remediation. A comparison with a continuous adsorptive barrier. Desalination and Water Treatment, doi: 10.1080/19443994.2015.1130921.
[9] I.D. Pulford, C. Watson. Phytoremediation of heavy metal-contaminated land by trees: a review. Environment International 29 (2003) 529-540.
[10] US Department of Agriculture. Phytoremediation: Using Plants To Clean Up Soils.
[11] Campanella, Bock, C., Schroder, P., Phytoremediation: PCBs And PCDD/Fs Environmental Science and Pollution Research January 2002, Vol 9, Issue 1, pp 73-85
[12] A. Sas-Nowosielska et al., Environmental Pollution, Vol. 128, Issue 3, 2004, 373-379.
[13] M. Šyc, M. Pohořelý, M. Jeremiáš, M. Vosecký, P. Kameníková, S. Skoblia, K. Svoboda and M. Punčochář. Behavior of Heavy Metals in Steam Fluidized Bed Gasification of Contaminated Biomass. Energy Fuels, 2011, 25 (5), 2284-2291. M. Šyc et al., Willow trees from heavy metals phytoextraction as energy crops. Biomass and Bioenergy, Vol. 37, 2012, 106-113. P. Vervaeke et al., Fate of heavy metals during fixed bed downdraft gasification of willow wood harvested from contaminated sites. Biomass and Bioenergy, Volume 30, Issue 1, 2006, 58-65.
[14] J.S. Lighty, G.D. Silcox, D.W. Pershing, V.A. Cundy, D.G. Linz. Fundamentals for the Thermal Remediation of Contaminated Soils. Particle and Bed Desorption Models. Environ. Sci. Technol. 1990, 24, 750-757. M.T. Smith, F. Berruti, A.K. Mehrotra. Thermal Desorption Treatment of Contaminated Soils in a Novel Batch Thermal Reactor. Ind. Eng. Chem. Res. 2001, 40, 5421-5430.
[15] Piano Operativo Nazionale Ricerca e Competitività 2007-2013, PON01_00599 “Dispositivo Mobile per Desorbimento Idrocarburi (DI.MO.D.I.)” – Consortium leader: Second University of Naples (Scientific Coordinator and Principal Investigator: Prof. Dino Musmarra)

Current Situation of Emerging Technologies for Upgrading Heavy Oils

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

The change in the average crude oil quality, due to the scarcity of light oil reserves and to the increasing use of shale oil, oil sands and bitumen, is causing significant difficulties to refineries, which are obliged to accept heavier feeds with very different physical properties (lower API gravity, higher amount of impurities). This has stimulated the development of new technologies for upgrading heavy and extra-heavy oils in order to improve their characteristics and, consequently, refinery performance.

Heavy oils are classified as oils with an API gravity within the range 10°-22°, whereas extra-heavy oils have an API gravity < 10°. The geographical distribution of heavy oil and bitumen reserves is reported in Table 1 [1]: these reserves are continuously increasing, replacing the light oil ones, and Oil & Gas companies have developed competitive solutions to extract and treat these oils.

Table 1 - Geographical distribution of heavy oil and bitumen reserves.
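The API classes above map directly onto specific gravity through the standard definition API = 141.5/SG - 131.5 (SG at 60°F); a short helper makes the heavy/extra-heavy boundaries explicit:

```python
def api_gravity(specific_gravity: float) -> float:
    """API gravity from specific gravity at 60 degF (standard definition)."""
    return 141.5 / specific_gravity - 131.5

def classify(sg: float) -> str:
    """Classify an oil using the API ranges given in the text."""
    api = api_gravity(sg)
    if api < 10.0:
        return "extra-heavy oil"   # denser than water
    if api <= 22.0:
        return "heavy oil"
    return "conventional/light oil"

print(api_gravity(1.00))  # 10.0 -> the heavy/extra-heavy boundary coincides with water density
print(classify(0.95))     # heavy oil (API ~17.4)
```

Note that 10° API corresponds exactly to the density of water, so extra-heavy oils sink in water, one reason their extraction and handling are so demanding.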

Clearly, there is the need to upgrade heavy oils before feeding them to refineries, in order to improve the quality of downstream products and to increase the topping distillate flows: the conventional upgrading processes include carbon rejection and hydrogen addition technologies. However, when the properties of the heavy and extra-heavy oils are critical, more effective solutions are needed to make the oil suitable as refinery feedstock. For this reason, researchers and industries are proposing a number of innovative solutions, some of which are already in the full-scale demonstration phase.

In the following, some of these new emerging oil upgrading configurations are presented. For a complete list of developed technologies, the author suggests the review paper by Castaneda, Munoz and Ancheyta[2], which includes the description and comparison of 23 new processes.


2.Heavy-to-Light heavy oil upgrading process

The Heavy-to-Light (HTL) process is patented by Ivanhoe Energy[3], a company recently acquired by FluidOil[4].

The process is based on a circulating transport bed of hot sand that heats the heavy feedstock and converts it into lighter products. The upgraded products and the sand are then separated in a cyclone, and the products are quenched and routed to the atmospheric distillation unit.

The main benefits of the HTL configuration are that it can be integrated at the well-head and that it is simple and cheap. The drawbacks are the large dimensions of the equipment, the low volumetric yield of upgraded crude, the low capacity for extra-heavy oil processing, the high formation of coke and a low reduction of sulfur content. At the exit of the upgrading plant, the oil reaches an API gravity of 18-19° and a kinematic viscosity at 100°C of 23 cSt.

The technology development has been completed and Ivanhoe Energy is designing industrial plants in Canada, Latin America and the Middle East.

Figure 1 - Ivanhoe Energy's HTL test facility in San Antonio, Texas[5].

3.HCAT process

HCAT is a catalytic heavy oil upgrading technology developed by Headwaters Technology Innovations Group (HTIG)[6]. The process is composed of a catalytic reactor in which a molecule-sized catalyst assures high conversion of the heavy oil. The main benefits of the HCAT configuration are constant product quality, feedstock flexibility, and flexible, high conversion (up to 95%).

Neste Oil Corporation's Porvoo refinery, in Finland, was the first refinery to implement, in 2011, the HCAT heavy oil upgrading technology[7]. More than 500,000 barrels of heavy oil are processed in their upgrading reactors every day and an additional refinery capacity of 200,000 barrels per day has been reached.


4.Viscositor process

Viscositor technology is patented by the Norwegian company Ellycrack AS[8] and is based on the atomization of the heavy oil by means of heated sand in a high-velocity chamber. Basically, the process is composed of the following steps (refer to the block diagram shown in Figure 2):

  • sand particles, heated up by coke combustion, are pneumatically conveyed into a collision reactor by means of the hot combustion gases;
  • pre-heated heavy oil is fed to the reactor and collides with the sand particles, evaporating and cracking;
  • the solid particles, the coke generated during the collision process and the oil stream are separated in a cyclone. The solid phase is sent to a regenerator while the oil stream is fed to a dual condensation system. The generated coke is used to supply the heat duty.

The advantages of the process are the low temperature and pressure required, the almost self-sustained operation thanks to the coke formed in the reactor, and the good quality of the final upgraded oil.

Figure 2 - Block diagram of the Viscositor process

5.IMP process

IMP configuration[9] is a catalytic hydrotreatment-hydrocracking process of heavy oil at mild operating conditions, able to achieve high removal of metals, sulfur compounds and asphaltenes, and a large conversion of the heavier share of the oil stream into more valuable distillates.

The most important characteristics of the IMP process are the low fixed investment and the low operating costs, with an attractive return on investment.

The IMP technology can be applied both for conversion of heavy and extra-heavy oils to intermediate oils and as a first processing unit for heavy and extra-heavy crude oils in a refinery. The final properties of upgraded oils, depending on the heavy oil feedstock, are: API gravity = 22-25°; sulfur content = 1.1 - 1.15 wt%; C7 asphaltenes = 4.7 - 5.3.

A first industrial unit application is being analyzed by Petroleos Mexicanos (PEMEX).


6.Nex-Gen Ultrasound technology

Nex-Gen is an innovative process for heavy oil upgrading which uses ultrasonic waves to break the long hydrocarbon chains and simultaneously adds a hydrogen stream.

Basically, Nex-Gen is a cavitation process: the ultrasonic energy forms cavitation bubbles in the heavy oil stream; the bubbles then collapse at high temperature and pressure, breaking the long chains of heavy hydrocarbon molecules.

The next figure shows a scheme of the Nex-Gen configuration.

Figure 3 - Scheme of NexGen architecture[10].

A first industrial plant is going to be designed to be integrated near the Athabasca tar sands (Edmonton, Alberta), with a capacity of 10,000 barrels per day. The mild operating conditions (temperature = 0-70°C, pressure = 1-5 bar) allow a reduction of energy consumption and of operating and maintenance costs by 50%.

7.Chattanooga process

Chattanooga process is a continuous process based on a fluidized bed reactor operating at high pressure and temperature in a hydrogen environment.

The main equipment of the configuration is the pressurized fluid bed reactor and associated fired hydrogen heater. The reactor can continuously convert oil by thermal cracking and hydrogenation into hydrocarbon vapors while removing spent solids[11].

The energy requirements associated with the Chattanooga configuration are significantly reduced in respect to the traditional heavy oil upgrading technologies, as well as operating costs and capital costs.


Figure 4 - Chattanooga technology description[11].
[3] Patent No. US 8,105,482 B1 (January 2012).
[6] Patent No. US 7,578,928 B2 (August 2009),
[8] Patent No. US 6,660,158 (December 2003).
[9] Patent No. US 7,651,604 B2 (January 26, 2010).
[10] Heavy to light upgrading project: Revolutionary upgrading technology converting extremely heavy crude oil to light sweet crude oil.

Sealing Materials for Well Integrity

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1.Theme description


Well integrity is defined in NORSOK D-010[1] (a functional standard which fixes minimum requirements for the equipment of oil and gas production wells) as "application of technical, operational and organizational solutions to reduce risk of uncontrolled release of formation fluids throughout the life cycle of a well".

Basically, technologies for well integrity include many aspects about well operating processes, well services, tubing and wellhead integrity, safety system testing, etc..

Clearly, production tubes have the greatest probability of failure, since they are exposed to corrosive elements from the produced fluids. Moreover, the production tubing consists of many connections, which are points of weakness with a high risk of leakage. International standards impose the installation of two well barriers between the reservoir and the environment in order to prevent loss of containment.

In this paper, among the components of the production tube sealing system installed to avoid fluid losses, the innovative sealing materials are assessed and compared.

The most commonly used sealing material is cement, a well-known and cheap material. However, several of its properties are not ideal for handling well integrity issues: for example, gas migration through its structure, long-term degradation due to temperature and chemical exposure, shrinkage, etc.

The following figure shows the main problems in applying cement as a sealing material in well casings[2].

Figure 1 - Cement technical drawbacks in sealing applications[2]: a), b), f) leak paths due to poor bonding between cement and casing/formation; c) fluid migration due to cement fracturing; d) leakages occurring from casing failure; e) flow path through the cement layer due to gas migration during hardening.

For this reason, alternative materials for sealing are studied in order to overcome the issues related to the cement application.

Such materials have to assure a series of properties, among which:

  • low permeability;
  • capacity of bonding to the casing and to the borehole;
  • pumpable without excessive costs;
  • chemically inert and not-reactive with chemical substances present in the formation;
  • self-levelling in the well;
  • safe to be handled and cheap.

An exhaustive list of the most interesting alternative materials is reported in [2]. In the following, the most interesting ones (ThermaSet, Sandaband and Ultra Seal) are presented and described.



2.ThermaSet

ThermaSet® is a polymer-based resin used to solve a series of well integrity issues, such as lost circulation, compromised wellbore integrity, plug and abandonment, and the remediation of sustained casing pressure[3],[4]. As a liquid, ThermaSet is easily pumped and injected since it does not contain solid particles. However, particles can be added to accurately modulate the liquid density. Compared to cement, ThermaSet has a higher compressive and tensile strength, thus improving the mechanical properties of the sealing material and its behavior under the variable loads caused by pressure and temperature cycles, which make the casing expand and contract, exerting a force on the annulus material. In the following table, the properties of ThermaSet and of a typical cement (class G Portland) are compared[5],[6], attesting the improved characteristics of the innovative material.


Table 1 - Mechanical properties comparison between ThermaSet and Portland cement.

The excellent properties of the material are maintained over time, without significant decay: Figure 2 shows the compressive strength after 1 year under a crude oil pressure of 500 bar, demonstrating that its value stabilizes within the range 40-45 MPa[6].
Figure 2 - ThermaSet compressive strength evolution over time after long-term exposure to crude oil at 500 bar.
Moreover, various experimental tests demonstrated that ThermaSet maintains a low permeability also in long-term tests[7].


3.Sandaband

Sandaband is a patented material[8], owned by Sandaband Well Plugging (SWP), consisting of 70% to 80% quartz solids with a variable grain size (between 1 µm and 2 mm)[9]. The rest of the volume is composed of water and chemicals that make the material easily pumpable.

All materials composing Sandaband are chemically stable, with no degradation over time or reaction with other chemicals.

An important property is that Sandaband behaves like a Bingham plastic material, i.e. it needs a minimum shear stress to start flowing and then shows a linear dependence between shear stress and shear rate; as a result, the material quickly forms a rigid body as soon as pumping is stopped (refer to Figure 3).

Figure 3 - Bingham liquid behaviour compared to a Newtonian fluid.
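The Bingham behaviour described above can be written as tau = tau_y + mu_p * gamma_dot for tau > tau_y, with no flow below the yield stress tau_y. A small sketch with assumed illustrative parameters contrasts it with a Newtonian fluid:

```python
def bingham_shear_rate(tau: float, tau_yield: float, mu_plastic: float) -> float:
    """Shear rate (1/s) of a Bingham plastic under shear stress tau (Pa).
    Below the yield stress the material behaves as a rigid body (no flow)."""
    if tau <= tau_yield:
        return 0.0
    return (tau - tau_yield) / mu_plastic

def newtonian_shear_rate(tau: float, mu: float) -> float:
    """A Newtonian fluid flows under any non-zero stress."""
    return tau / mu

# Assumed illustrative parameters (not Sandaband data)
TAU_YIELD = 50.0  # Pa, yield stress
MU_P = 0.5        # Pa s, plastic viscosity

print(bingham_shear_rate(40.0, TAU_YIELD, MU_P))   # 0.0 -> the plug stays rigid below yield
print(bingham_shear_rate(100.0, TAU_YIELD, MU_P))  # 100.0 -> linear flow above yield
```

The zero-flow branch below the yield stress is exactly what makes the material form a rigid, gas-tight plug once the pumping pressure is removed.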
  Sandaband has a series of unparalleled properties, making it excellent for application as a sealing material for well integrity[10]:
  • Long term integrity
  • Bonds to steel
  • Removable
  • Ductile
  • Non shrinking
  • Cost effective
  • Chemically inert
  • Gas-tight
  • Pumpable
  • Environmentally safe
  • No health hazards
  • Verifiable
  • HPHT resistant
  • No reservoir damage
  • Non-erosional
  Tests demonstrated the long-term integrity in the temperature range -10°C to 250°C, the low permeability under operating conditions, and the absence of effects on gas-tightness from casing movement and vibration.


Figure 4 - Sandaband handling.
  The innovative material has been tested in the field for a Temporary P&A (Plug and Abandonment) (BP Norway, Ula Well, 2007) and for a Permanent P&A (Det Norske Oljeselskap).

4.Ultra Seal

Ultra Seal, developed by CSI Technologies[11], is a material composed of a resin and a hardener, modulated to make the sealant pumpable. Resin and hardener are mixed on the surface in conventional mixing equipment, and clean-up requires only a minimal quantity of a methanol and water mixture. Ultra Seal R is liquid, thus permitting more precise mixing than Portland cement. The material is characterized by low permeability and excellent mechanical properties.
Figure 5 - Ultra Seal
[1][2] Dickson Udofia Etetim, "Well Integrity behind casing during well operation. Alternative sealing materials to cement", Norwegian University of Science and Technology, Department of Petroleum Engineering and Applied Geophysics
[3][4][5] Wellcem AS. ThermaSet Test Report, 2001.
[6][7][8] U.S. Patent #6,715,543; U.S. Patent #7,258,174
[9][10][11] "Ultra Seal"

Solar Refinery

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)   

1.Theme description

  The purpose of a solar refinery is to enable an energy transition from today’s ‘fossil fuel economy’, with its associated risks of climate change caused by CO2 emissions, to a new and sustainable ‘carbon dioxide economy’ that instead uses CO2 as a C1 feedstock, together with H2O and sunlight, for making solar fuels.
Figure 1 - Scheme of a  ‘solar refinery’ for making fuels and chemicals from CO2, H2O and sunlight[1]

On an industrial scale, one can visualize a solar refinery (see Figure 1) that converts readily available sources of carbon and hydrogen, in the form of CO2 and water, to useful fuels, such as methanol, using energy sourced from a solar utility. The solar utility, optimized to collect and concentrate solar energy and/or convert solar energy to electricity or heat, can be used to drive the electrocatalytic, photoelectrochemical (PEC), or thermochemical reactions needed for conversion processes. For example, electricity provided by PV cells can be used to generate hydrogen electrochemically from water via an electrocatalytic cell.

However, hydrogen has a low volumetric energy density and cannot be easily stored and distributed like hydrocarbon fuels. Therefore, rather than utilizing solar-generated hydrogen directly and primarily as a fuel, its utility is much greater, at least in the short to intermediate term, as an onsite feedstock for converting CO2 to CH4 or for generating syngas, heat, or electricity. Reacting CO2 with hydrogen not only provides an effective means of storing CO2 (in methane, for example), it also produces a fuel that is much easier to store, distribute, and utilize within the existing energy supply infrastructure.
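A rough mass balance makes the CO2-to-CH4 route concrete. Assuming full conversion via the Sabatier reaction (CO2 + 4H2 → CH4 + 2H2O), the hydrogen demand per unit of CO2 follows from molar masses alone; this is illustrative stoichiometry, not a process figure from the text:

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O (assuming full conversion)
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04  # molar masses, g/mol

h2_per_kg_co2 = 4 * M_H2 / M_CO2    # kg H2 needed per kg CO2 converted
ch4_per_kg_co2 = M_CH4 / M_CO2      # kg CH4 produced per kg CO2 converted

print(round(h2_per_kg_co2, 3))   # ~0.183 kg H2 per kg CO2
print(round(ch4_per_kg_co2, 3))  # ~0.364 kg CH4 per kg CO2
```

So storing a tonne of CO2 as methane ties up roughly 180 kg of solar-generated hydrogen, which is why the efficiency of the upstream water-splitting step dominates the overall picture.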

The idea of converting CO2 to useful hydrocarbon fuels by harnessing solar energy is attractive in concept. However, significant reductions in CO2 capture costs and significant improvements in the efficiency with which solar energy is used to drive chemical conversions must be achieved to make the solar refinery a reality.

Figure 2 - Schematic diagram of integrated PV-Hydrogen utility energy system

Solar energy collected and concentrated within a solar utility can be harnessed in different ways: (1) PV systems could convert sunlight into electricity, which in turn, could be used to drive electrochemical cells that decompose inert chemical species such as H2O or CO2 into useful fuels (see figure 2); (2) PEC or photocatalytic systems could be designed wherein electrochemical decomposition reactions are driven directly by light, without the need to separately generate electricity; and (3) photothermal systems could be used either to heat working fluids or help drive desired chemical reactions such as those connected with thermolysis, thermochemical cycles, etc. (see Figure 3). Each of these approaches can be used to generate environmentally friendly solar fuels that offer “efficient production, sufficient energy density, and flexible conversion into heat, electrical, or mechanical energy”[2]. The energy stored in the chemical bonds of a solar fuel could be released via reaction with an oxidizer, typically air, either electrochemically (e.g., in fuel cells) or by combustion, as is usually the case with fossil fuels. Of the three approaches listed here, only the first (PV and electrolysis cells) can rely on infrastructure that is already installed today at a scale that would have the potential to significantly affect current energy needs. In fact, the PEC and photothermal approaches, though they hold promise for achieving simplified assembly and/or high energy conversion efficiencies, require considerable development before moving from the laboratory into pilot-scale and commercially viable assemblies.



Figure 3 - Solar-Driven, Two-Step Water Splitting to Form Hydrogen Based on Reduction/Oxidation Reactions

2.Carbon Dioxide-derived fuels

The CO2 concentration in the atmosphere is still low enough (0.04%) that it would be impractically expensive to capture and purify CO2 from the atmosphere, but other sources of CO2 are available that are considerably more concentrated. Power generation based on natural gas or coal combustion is responsible for the major fraction of global CO2 emissions, with other important sources being the cement, metals, oil refinery, and petrochemical industries[3]. Indeed, a growing number of large-scale power plant carbon dioxide capture and storage (CCS) projects are either operating, under construction, or in the planning stage, some of them involving facilities as large as 1,200 MW capacity[4]. While solar PV energy conversion has the potential to reduce CO2 emissions by serving as an alternative means of generating electricity, harnessing solar energy to convert the CO2 generated by other sources into useful fuels and chemicals that can be readily integrated into existing storage and distribution systems would move us considerably closer to achieving a carbon-neutral energy environment.

Herron et al.[5], in a very recent review, examine the main routes for CO2 capture from stationary sources with high CO2 concentrations derived from post-combustion, precombustion, and oxy-combustion processes.

In post-combustion, flue gases formed by combustion of fossil fuels in air lead to gas streams with 3%–20% CO2 in nitrogen, oxygen, and water. Other processes that produce even higher CO2 concentrations include pre-combustion in which CO2 is generated at concentrations of 15%–40% at elevated pressure (15–40 bar) during H2 enrichment of syngas via a water–gas shift reaction (WGS — see Figure 1) and oxy-combustion in which fuel is combusted in a mixture of O2 and CO2 rather than air, leading to a product with 75%–80% CO2. CO2 capture can be achieved by absorption using liquid solvents (wet-scrubbing) or solid adsorbents.

In the former approach, physical solvents (e.g., methanol) are preferred for concentrated CO2 streams with high CO2 partial pressures, while chemical solvents (e.g., monoethanolamine, MEA) are useful in low-pressure streams.

Energy costs for MEA wet-scrubbing are reportedly as low as 0.37–0.51 MWh/ton CO2 with a loading capacity of 0.40 kg CO2 per kg MEA. Disadvantages of this process are the high energy cost for regenerating solvent, the cost to compress captured CO2 for transport and storage, and the low degradation temperature of MEA. Alternatives include membrane and cryogenic separation. With membranes there is an inverse correlation between selectivity and permeability, so one must optimize between purity and separation rate.
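Taking the figures just quoted at face value (0.37–0.51 MWh per ton of CO2 and 0.40 kg CO2 per kg MEA), a back-of-the-envelope sketch shows the scale of solvent circulation and regeneration energy for a plant generating 12,000 metric tons of CO2 per day (the 500 MW plant cited in the text); this is illustrative arithmetic, not a process design:

```python
co2_rate_tpd = 12000.0            # t CO2/day from a 500 MW coal plant (from the text)
loading = 0.40                    # kg CO2 captured per kg MEA (from the text)
energy_mwh_per_t = (0.37, 0.51)   # regeneration energy range (from the text)

mea_circulation_tpd = co2_rate_tpd / loading            # t MEA circulated per day
daily_energy_mwh = [e * co2_rate_tpd for e in energy_mwh_per_t]
avg_power_mw = [e / 24 for e in daily_energy_mwh]       # continuous power draw

print(round(mea_circulation_tpd))              # 30000 t MEA/day
print([round(e) for e in daily_energy_mwh])    # [4440, 6120] MWh/day
print([round(p) for p in avg_power_mw])        # [185, 255] MW continuous
```

Capturing all of such a plant's CO2 would thus absorb a power draw comparable to a large fraction of its own 500 MW output, which is why the efficiency penalty of capture recurs throughout this discussion.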

Cryogenic separation ensures high purity at the expense of low yield and higher cost. Currently, MEA absorption is industrially practiced, but is limited in scale: 320–800 metric tons CO2/day (versus a CO2 generation rate of 12,000 metric tons per day for a 500 MW power plant). Scale-up would be required to satisfy the needs of a solar refinery.

Alternatives, such as membranes, have relatively low capital costs, but require high partial pressures of CO2 and a costly compression step to achieve high selectivity and rates of separation.

A very important point to consider about solar refinery reliability is that since carbon capture reduces the efficiency of power generation, power plants with carbon capture will produce more CO2 emissions (per MWh) than a power plant that does not capture CO2. Therefore, the cost of transportation fuel produced with the aid of CO2 capture must also cover the incremental cost of the extra CO2 capture[6]. These costs must then be compared to the alternative costs associated with large-scale CO2 sequestration. Finally, one also needs to consider the longer-term rationale for converting CO2 to liquid fuels once fossil-fuel power plants cease to be major sources of CO2. Closed-cycle fuel combustion and capture of CO2 from, e.g., vehicle tailpipes, presents a considerably greater technical and cost challenge than capture from concentrated stationary sources.


3.Challenges & Opportunities

Christos Maravelias and colleagues from the University of Wisconsin have recently modeled and analyzed the energy and economic cost of every step and each alternative technology contained in a solar refinery[7]. The result is a general framework that will allow scientists and engineers to evaluate how various improvements in materials’ manufacturing and processing technologies that enable carbon dioxide capture and conversion to fuels, using solar, thermal and electrical energy inputs, would accelerate the development, influence the cost and shape the vision of the solar refinery. It will also enable evaluation of which alternative technologies are the most economically feasible and should be targeted, or highlight those that, even if developed, would still be hopelessly uneconomic and can therefore be ruled out immediately.

The view that emerges from this techno-economic evaluation of building and operating a solar refinery is one of guarded optimism. On the subject of energy efficiency, it is clear that solar-powered CO2 reduction is currently lagging far behind solar-driven H2O splitting, and more research is needed to improve the activity of photocatalysts and the efficacy of photoreactors. In the indirect process of transforming CO2/H2O to fuels, it is apparent that if the currently achievable solar H2O-to-H2 conversion (>10%) can be matched by solar CO2/H2-to-fuel conversion efficiencies, through creative catalyst design and reactor engineering, this would represent a promising step towards an energetically viable solar refinery. For the process that directly transforms CO2/H2O to fuels, improvements in conversion rates and product selectivity are key requirements for achieving energy efficiency in the solar refinery.

Economic efficiency is also key to the success of the solar refinery of the future. For currently achievable CO2 reduction rates and efficiencies, the minimum selling price of methanol, a representative fuel, was evaluated by the techno-economic analysis and turned out to be more than three times greater than the industrial selling price, even though the cost of the CO2 reduction step, which is estimated to be substantial, was not included in the estimates. Improvement in the activity of CO2 reduction photocatalysts by several orders of magnitude would have a significant impact on the energy and economic costs of operating a solar refinery.

It is clear that the cost and energy efficiency of carbon capture and storage is an area where big improvements need to be made if the solar refinery is to be a success. One other point worth highlighting is the availability of water, since in some parts of the world water availability could be a major constraint.

To conclude, multidisciplinary teams of materials chemists, materials scientists, and materials engineers across the globe believe in the dream of the solar refinery and a sustainable CO2-based economy. In any case, it is clear that developing models to evaluate the energy efficiency and economic feasibility of the solar refinery, while identifying the hurdles that have to be surmounted to realize competitive production of solar fuels, will continue to play a crucial role in the development of the required technologies.

[1] Reproduced from A general framework for the assessment of solar fuel technologies, Energy & Environmental Science, DOI: 10.1039/C4EE01958J with permission of The Royal Society of Chemistry.
[2] Jooss, C. and H. Tributsch. “Chapter 47: Solar Fuels”, Fundamentals of Materials for Energy and Environmental Sustainability. D.S. Ginley and D. Cahen, editors. Cambridge University Press (2011).
[3] Carbon Dioxide Emissions. United States Environmental Protection Agency.
[5] Herron, J.A., J. Kim, A.A. Upadhye, G.W. Huber, C.T. Maravelias. “A General Framework for the Assessment of Solar Fuel Technologies.” Energy Environ. Sci. (2015). 8, 126-157.
[6] Randall Field, MIT Energy Initiative, personal communications.
[7] Energy and Environmental Science, 2014, DOI: 10.1039/c4ee01958j

Water Treatment in Unconventional Gas Production

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

 The average $3 million drilling and fracturing process required for each well uses an average of 4.2 million gallons of water, much of which has traditionally been freshwater. The volume of water can vary significantly and is highly dependent on the length of the drilled lateral[1].

About 99.5 percent of the fracturing fluid is water and sand, while other components such as lubricants and bactericides constitute the remaining 0.5 percent. This fracturing mixture enters the well bore, and some of it returns as flowback or produced water, carrying with it, in addition to the original materials, dissolved and suspended minerals and other materials that it picks up in the shale[2].

Figure 1 - Volumetric composition of process water in shale gas production.

Once in production for several years, natural gas wells can undergo additional hydraulic fracturing to stimulate further production, thereby increasing the volume of water needed for each well. Approximately 10-25 percent of the water injected into the well is recovered within three to four weeks after drilling and fracturing. Water that is recovered during the drilling process (drilling water), returned to the surface after hydraulic fracturing (flowback water), or stripped from the gas during the production phase of well operation (produced water) must be properly disposed of[2].
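Combining the figures above (an average of 4.2 million gallons injected per well, with 10–25 percent recovered in the first weeks) gives a feel for the flowback volumes a single well produces; this is simple arithmetic on the text's averages, not site data:

```python
water_per_well_gal = 4.2e6          # average water use per well (from the text)
recovery_fraction = (0.10, 0.25)    # recovered within 3-4 weeks (from the text)

flowback_gal = [water_per_well_gal * f for f in recovery_fraction]
flowback_bbl = [v / 42.0 for v in flowback_gal]  # 1 barrel = 42 US gallons

print([round(v) for v in flowback_gal])  # [420000, 1050000] gallons per well
print([round(v) for v in flowback_bbl])  # [10000, 25000] barrels per well
```

Ten to twenty-five thousand barrels of contaminated water per well is the stream the treatment and disposal options below have to handle.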

The recovered water contains numerous pollutants such as barium, strontium, oil and grease, soluble organics, and a high concentration of chlorides. The contents of the water can vary depending on geological conditions and the types of chemicals used in the injected fracturing fluid. These wastewaters are not well suited for disposal in standard sewage treatment plants, as recovered waters can adversely affect the biological processes of the treatment plant (impacting the bacteria critical to digestion) and leave chemical residues in the sewage sludge and the discharge water. Many producers have been transporting flowback and produced water long distances to acceptable water treatment facilities or injection sites. But deep well injection is now also meeting challenges.

The water disposal challenge has spurred a new water treatment industry in the region, with entrepreneurs and established companies creating portable treatment plants and other innovative treatment technologies to help manage produced water, mainly focused on water reuse.


Figure 2 - Potential beneficial reuses of process water in the oil&gas industry.

2.Water Costs and Quality Concerns

Dealing with water scarcity and wastewater (i.e., brine) quality are top priorities in shale and tight gas production. Doing this requires water reuse technology that reduces the waste stream by efficiently separating out salts, heavy metals and nutrients to produce recovered water. Effective filtration must eliminate suspended solids from salt water going to deep well injection.

Cost can be an overriding factor in water treatment and processing decisions. There certainly are environmental considerations involved in using chemicals to perform operations such as frac-water treatment or salt removal and recovery. However, the cost of mitigating chemistry also comes into play. Chemical friction reducers make source water slicker for faster pumping, and then specialty chemicals like biocides, which kill microorganisms, and scale inhibitors, which control deposits, are added to the water. Mobile ultrafiltration technology can reduce the need for biocides – and the cost of treatment.

Slick water fracturing and horizontal drilling were revolutionary developments that made it economically viable to extract unconventional gas on a grand scale. Fracturing lowered the cost of moving the gas to the well bore, while horizontal drilling – which covered a vastly greater expanse of territory than a single vertical probe – exponentially increased the amount of gas that could be withdrawn. It became much more profitable to put wells into shale gas formations, but the cost of doing that business today depends, in no small part, on what ultimately happens to the brine. That, in turn, depends on geography. Chemical treatment is not the challenge so much as affordability; most brine is just discharged to disposal wells, but the fewer of these wells there are, the greater the production expense incurred, and in some parts of the country, geology or the lack of water makes disposal wells unfeasible.

In geographical areas with major shale gas deposits where the geology won’t allow disposal wells, like Pennsylvania, the brine has to be trucked out for disposal elsewhere or cleaned for reuse or discharge. Not only is transportation potentially dangerous, it is also expensive: disposing of produced water at an Ohio injection well costs from $1.50 to $2.00 per barrel, and getting the wastewater there from eastern Pennsylvania requires many trucks, each costing about $100/hour for an estimated six-hour typical trip. Evaporation and crystallization technologies can recover almost all of the produced water as pure distilled water and create a salable salt product for uses such as road de-icing or grey water softening, but that adds another, higher level of costs. In the West, where water often can be inexpensive but scarce, it makes much more economic sense to clean up the wastewater and then sell it for land application[3].
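The trucking economics just described can be put in per-barrel terms. The truck capacity below is an assumed round number for a typical water-hauling truck, not a figure from the text:

```python
disposal_fee = (1.50, 2.00)   # $/bbl at the injection well (from the text)
truck_rate = 100.0            # $/hour per truck (from the text)
trip_hours = 6.0              # estimated typical trip (from the text)
truck_capacity_bbl = 110.0    # ASSUMED typical hauling-truck capacity

trucking_per_bbl = truck_rate * trip_hours / truck_capacity_bbl
total_per_bbl = [fee + trucking_per_bbl for fee in disposal_fee]

print(round(trucking_per_bbl, 2))         # ~5.45 $/bbl just for transport
print([round(t, 2) for t in total_per_bbl])  # ~[6.95, 7.45] $/bbl all-in
```

Under this assumption transport dominates the disposal fee several times over, which is why on-site or mobile treatment for reuse becomes attractive.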


3.Guidelines for technology selection

 In order to select the most suitable technology for water treatment, there are issues related to the condition, as well as the cost, of the water that must be addressed. Here are some of the principal ones:

  • Most surface water used for fracking is fresh water, and this surface water has variable quality, so ultrafiltration is an effective way to treat this influent source.
  • Bacteria, corrosion and the buildup of solids in storage tanks are problems for disposal well management to solve.
  • While technical obstacles involved in salt concentration can be overcome through membrane and thermal processes, chemical pre-treatment to remove oil and grease from the brine before it passes through the membranes is a challenge on a case-by-case basis.
  • Reuse and recovery options make unconventional gas development sustainable, but they also involve handling more wastewater, so integrated discharge water management and reuse solutions are necessary for safe and efficient treatment and recycling.
  • The presence of Naturally Occurring Radioactive Materials, or NORMs, in frac flowback and produced water can contaminate the salt product created by crystallization. Pretreatment of brine can remove NORMs such as radium.
  • Brine disposal into evaporative and wastewater ponds is getting a great deal of critical attention, so it is important to put a cleaner disposal product into the ponds or somehow reduce industry dependency upon them.
  • Because industry operators do not just stay in fixed locations, but frequently move from site to site to drill the most promising gas plays, water treatment systems should be mobile[4].

4.Some Research Project

While progress has been made on the water quantity and quality impacts of shale gas development, challenges remain, including the potential cumulative long-term water impacts of the industry. Therefore, additional water research and environmental policy changes will be necessary in order to fully realize the economic opportunity of the region’s natural gas wealth while safeguarding the environment.

In the following, some interesting research projects focused on water reuse are reported.

  Project 1: Advancing a Web Based Decision Support Tools (DST) for Water Reuse in Unconventional O&G Development[5]

The objective of this project is the development of a database and a decision support tool (DST) for selecting and optimizing water reuse options for unconventional O&G development, with a focus on Flowback and Produced Water Management, Treatment and Beneficial Use for Major Shale Gas Development Basins.

  • Funding agency: US DOE-RPSEA
  • Start date: 1/2012
  • End date: 1/2016
  • Funding: $286,984
  Project 2: Engineered Osmosis for Advanced Pretreatment of O&G Wastewater[6]

The objective of this project is to further develop and optimize engineered osmosis membranes and systems for the treatment of unconventional O&G wastewater (see Figure 3). The main project outcomes are:

  • Field test the engineered osmosis process on drilling and produced waters in the DJ Basin
  • Develop process design tools and life cycle assessment
  • Funding agency: US DOE-RPSEA
  • Start date: 9/2011
  • End date: 6/2015
  • Funding: $1,323,805


Figure 3 - Engineered osmosis process scheme

  Project 3: Advanced Biological Pretreatment[7]

The objective of this project is the development and evaluation of cost-effective pre-treatment technologies for O&G wastewater, with emphasis on biological filtration. The major outcomes are the substantial removal of dissolved organic carbon (96%) and chemical oxygen demand (89%) in produced water from the Piceance and Denver-Julesburg basins.

  • Funding agency: NSF/SRN
  • Start date: 10/2012
  • End date: 9/2017
  • Funding: $1,400,390 to CSM
[1] Yoxtheimer, Dave. “Potential Surface Water Impacts from Natural Gas Development.” p. 5.
[2] Hammer, Rebecca and Jeanne VanBriesen. “In Fracking’s Wake: New Rules are Needed to Protect Our Health and Environment from Contaminated Wastewater,” p. 11. May 2012.
[3] July 2011, Journal of Petroleum Technology, p. 50, “Flowback to Fracturing: Water Recycling Grows in the Marcellus Shale”, by Stephen Rassenfoss, JPT Online Staff Writer

Thin Film Membrane Technology: Advances in Natural Gas Treatment

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1. Theme description

Natural gas (NG) treatments are the processes needed to sweeten and purify the extracted NG before feeding it to the grid. Such processes are crucial to reach the gas purity targets and constitute large fixed and operating costs for the NG production sector.

The main components to be removed in the NG purification process are the acid gases, such as carbon dioxide (CO2) and hydrogen sulphide (H2S), and, in many cases, nitrogen (N2).

As reported in the following table, the contents of such components in the extracted NG stream can be high, leading to challenging and expensive separation processes.

Component   Groningen       Lacq       Uch          Uthmaniyah       Ardjuna
            (Netherlands)   (France)   (Pakistan)   (Saudi Arabia)   (Indonesia)
CH4         81.3            69         27.3         55.5             65.7
C2H6        2.9             3          0.7          18               8.5
C3H8        0.4             0.9        0.3          9.8              14.5
C4H10       0.1             0.5        0.3          4.5              5.1
C5+         0.1             0.5        -            1.6              0.8
N2          14.3            1.5        25.2         0.2              1.3
H2S         -               15.3       -            1.5              -
CO2         0.9             9.3        46.2         8.9              4.1
Table 1 - Composition of natural gas reservoirs (%vol)[1]
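As a quick use of Table 1, one can rank the fields by total contaminant load (N2 + H2S + CO2), which is what drives the separation duty discussed next; '-' entries are taken as zero:

```python
# %vol of (N2, H2S, CO2) per field, read from Table 1 ("-" taken as 0)
fields = {
    "Groningen":  (14.3, 0.0, 0.9),
    "Lacq":       (1.5, 15.3, 9.3),
    "Uch":        (25.2, 0.0, 46.2),
    "Uthmaniyah": (0.2, 1.5, 8.9),
    "Ardjuna":    (1.3, 0.0, 4.1),
}

# total %vol of components that the purification train must remove
load = {name: round(sum(v), 1) for name, v in fields.items()}
print(load)  # Uch stands out: over 70 %vol of the raw stream is contaminant
```

The spread is large: a lean field like Ardjuna needs only trace polishing, while Uch-type gas makes the separation step a dominant cost.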
  Traditionally, the separation is carried out by means of:

1 - Absorption processes, by which the components to be separated are absorbed in a liquid solvent in a packed column and then released in the solvent regeneration step. The absorption of the component in the solvent can be chemical (chemical absorption) or physical (physical absorption). A widely applied industrial absorption process is the amine (MDEA) unit for acid gas removal[2].

2 - Adsorption processes, where selected components are adsorbed on the solid surface of specific particles. Then, by increasing the solid bed temperature (Thermal Swing Adsorption - TSA) or reducing the pressure (Pressure Swing Adsorption - PSA), the gas is extracted and the solid is regenerated. The most applied adsorption process is PSA, used to remove CO2 from natural gas streams by means of solid materials with a high affinity for carbon dioxide[3].

3 - Cryogenic processes, known as low-temperature distillation, which use very low temperatures to purify gas mixtures by exploiting the different volatilities of the gas components. They are not applied for acid gas removal from natural gas because the low concentrations required make this technique uneconomical.

However, growing interest is given to separation processes using selective membranes, thanks to their ease of operation, flexibility, smaller footprint and lower capital requirements. Basically, a membrane allows the transfer of certain components but not of others, thus leading to a separation. A schematic layout is reported in Figure 1.

Figure 1 - Conceptual layout of a membrane-based separation process, with a membrane selective to component A.

Compared to the other natural gas separation techniques, the membrane process has a lower energy requirement since it does not involve any phase transformation. Moreover, the process equipment is very simple, with no moving parts, compact, relatively easy to operate and control, and also easy to scale up and down[4].

In order to be applied in an industrial process, a selective membrane must have the following properties:

  • high permeability, leading to a high flux of the separated component through the membrane thickness;
  • high selectivity;
  • high mechanical and thermal resistance at the separation unit operating conditions;
  • the chemical resistance in the environment where the membrane is placed;
  • low cost and long durability.

The permeability increases as the selective layer thickness is reduced but, at the same time, both the selectivity and the mechanical resistance are penalized with thin membranes. Therefore, the membrane design requires an accurate optimization. Usually, the applied membranes are composite, fabricated by depositing a thin selective layer on a support able to assure the needed mechanical properties (refer to Figure 2).

Figure 2 - Composite membrane architecture
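The thickness/permeability trade-off can be made concrete with the standard solution-diffusion flux relation J = P·Δp/L. The permeability and pressure values below are assumed, order-of-magnitude illustrations (a few Barrer is typical of glassy polymers), not data from the text:

```python
BARRER_SI = 3.35e-16  # 1 Barrer expressed in mol.m/(m2.s.Pa)

def gas_flux(permeability_barrer, thickness_m, dp_pa):
    """Steady-state flux through a dense selective layer: J = P * dp / L."""
    return permeability_barrer * BARRER_SI * dp_pa / thickness_m  # mol/(m2.s)

# ASSUMED illustrative values: ~5 Barrer CO2 permeability, 10 bar
# partial-pressure difference, selective layers of 0.1 and 1.0 micron
thin = gas_flux(5.0, 0.1e-6, 10e5)
thick = gas_flux(5.0, 1.0e-6, 10e5)

print(thin / thick)  # ~10: a 10x thinner layer gives ~10x the flux
```

The flux gain from thinning the layer is linear, while the loss of mechanical strength is not, which is exactly why the composite architecture of Figure 2 (thin selective skin on a porous support) is the usual compromise.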

In the following, some examples of membrane applications for CO2, H2S and nitrogen removal from natural gas are reported.

2.Membrane application for CO2 removal

Carbon dioxide is the largest contaminant found in natural gas and, for this reason, a strong effort has been devoted to finding solutions for applying selective membranes to the CH4/CO2 separation process.

Currently, the only commercial membranes applied for CO2 removal are polymeric, made of cellulose acetate, polyimides, polyamides, polysulfone, polycarbonates and polyetherimide[5]. The most widely used material is cellulose acetate, as in UOP’s membrane systems: the Separex membrane system has been applied in a number of large NG plants installed worldwide (refer to Figure 3)[6],[7]. Another widely applied commercial product is the cellulose tri-acetate (CTA) membrane developed by Cameron, called CYNARA[8]: such a membrane is applied in the world’s largest CO2 membrane plant for natural gas clean-up (700 MMcf/d).

Figure 3 - Separex membrane skids installed at an EOR project in Latin America6.

Air Liquide has also developed a membrane module for the purification of NG by removal of CO2, H2S and water vapour[9]. The system is called MEDAL™ and is able to reach the pipeline specifications of 2 - 5% CO2 and 4 ppm H2S. Moreover, the membrane unit can also be used as a pre-treatment, removing the majority of the CO2 and H2S, followed by a typical amine process to further remove carbon dioxide.

Another product is offered on the market by ProSep[10]: the membrane is fabricated as a flat sheet and then arranged into a spiral-wound module, which is inserted into steel pressure-containing tubes. Such a membrane module has been applied in a number of plants in the U.S.A. and Colombia.

Figure 4 - ProSep membrane skid for CO2 removal installed in Texas.

Polymeric materials provide good separation performance but are poisoned by aromatics, organic liquids and water. For this reason, pre-treatment units have to be installed upstream of the membrane separation device, increasing the costs and the plant complexity.

Some innovative membrane technologies have been developed and installed. As an example, the CO2 separation membrane provided by Membrane Technology & Research (MTR)[11] is a new polymeric membrane able to withstand the various components of the NG mixture, thus reducing the impact of the pre-treatments.

3.Membrane application for H2S removal

Unlike CO2 removal by means of membranes, which now sees many industrial applications, the removal of H2S is still in a phase of pre-industrial development. The most interesting technologies are developed and tested by Membrane Technology & Research.

MTR develops the SourSep™ systems for the bulk removal of H2S from pressurized sour gas[12]. The proposed architecture is based on a simple single-stage process able, thanks to a proper membrane installation, to assure a bulk removal of H2S (>75%). The permeate stream generated is very sour and can be re-injected into the extraction well or processed in a conventional Claus unit. The retentate stream has to be fed to another H2S removal unit (amine absorption or a scavenger process) to further reduce the sulfur content. Figure 5 shows a SourSep™ installation.

Figure 5 - SourSep™ MTR installation for the removal of H2S.

Another membrane application for H2S removal, also proposed by MTR, targets the stringent H2S content limit (< 40 ppm) required when the NG is fed to an engine or a gas turbine. Such a low concentration is required to avoid corrosion and damage of the mechanical components. A scheme of this process, proposed by MTR, is illustrated in Figure 6: after NG compression, the raw gas stream is sent to a first filter and then to the membrane unit which, thanks to the high pressure and, consequently, the large pressure driving force across the membrane, drastically reduces the sulfur content.

Figure 6 - Process scheme for H2S removal from NG to reach the inlet feedstock quality targets for engines or gas turbines[13].

UOP has also developed and applied a polymeric membrane for the removal of H2S, testing it in a pilot plant and thus demonstrating the membrane stability over a wide range of operating conditions and proper values of permeability and selectivity.

4.Membrane application for N2 removal

Selective membranes are also proposed for NG denitrogenation but, according to the DOE[14], the challenge of developing competitive membranes for N2/CH4 separation has not yet been overcome.

Both glassy polymer (nitrogen-permeable) and rubbery polymer (methane-permeable) membranes can be applied. However, while a nitrogen/methane selectivity of at least 15 is required to make a denitrogenation membrane economically competitive, the highest selectivity available with current polymers is only about 2-3. Therefore, strong R&D efforts are required. Some interesting studies can be found in the scientific literature, such as the works published by the University of Massachusetts[15] and Aachen University[16].

Currently, MTR and CB&I are the only manufacturers of membranes for nitrogen removal. The membrane module they developed, called NitroSep™, has been applied to NG plants of up to 20 MMSCFD with nitrogen concentrations up to 15%[17] (refer to Figure 7).

Figure 7 - NitroSepTM module application in California[18].  
[1] Biruh Shimekit and Hilmi Mukhtar, " Natural Gas Purification Technologies – Major Advances for CO2 Separation and Future Directions", in Hamid Al-Megren "Advances in Natural Gas Technology", InTech edition, 2012, p.235-270.
[3] Cavenati, S., A. Carlos, et al. (2006). Removal of carbon dioxide from natural gas by vacuum pressure swing adsorption. Energy & fuels, Vol. 20, No. 6, pp. 2648-2659.
[4] Stern, A. (1994). Polymers for gas separations: the next decade. Journal of Membrane Science, Vol. 94, No. 1, pp. 1-65
[13] Pat Hale, Kaaeid Lokhandwala, "Advances in Membrane Materials Provide New Solutions in the Gas Business".
[15] ttp://

Pinch Analysis in the Oil&Gas Industries

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)

1. Theme description

Energy recovery and process integration are the most direct routes to increasing process efficiency. In industrial processes (in particular in the chemical and petrochemical sector), performance improvement is mandatory in order to face climate change as well as the growing energy crisis. This objective can be achieved by integrating systems for the simultaneous minimization of two objective functions: the investment cost and the energy consumption.


Figure 1 - Composite curves and Grand Composite Curve example[3]

By analysing the heat transfer, the optimal system is the one that balances the two abovementioned functions by identifying the most convenient way to transfer heat between the various fluids in the overall system (in a way compatible with the process control constraints, the space requirements and the safety risks)[1]. This aspect is complex by its nature, since the possibilities of interconnection in a plant configuration vary with the operating conditions. A systematic approach to this issue was given by Nishida and co-workers[2], who identified, from the theoretical point of view, the two main areas of process integration: the identification of the different possible alternatives and the development of heuristic criteria to discard the worst solutions. Pinch Analysis (PA) was born from these needs through academic works such as those developed at ETH Zurich and Leeds University in the 1970s[3]. The first systematic essay on pinch technology was given by Linnhoff[4]. He applied thermodynamic fundamentals to improving process efficiency, saving energy, reducing investment cost and optimizing process control. By analysing the heat flow cascade, Linnhoff defined the pinch point as the temperature level corresponding to a zero heat flux between the hot and cold fluids (Fig. 1) and proposed the graphical approach based on the Grand Composite Curve in order to evaluate the pinch and the energy target in a simple way[5]. His works became the main textbooks on pinch analysis. He also established Linnhoff March Ltd in 1983, offering process design services to international clients; in the 1990s around 80% of the world's largest oil and petrochemical companies became its clients or sponsors. The expanded 2006 edition, "Pinch Analysis and Process Integration", is the fundamental book of modern PA[6].
These methods are now recognized as fundamental also for pollution prevention, with a view to reusing and reducing resources as well as optimizing end-of-pipe treatment and disposal[7].


2.Theory and Practice

Intuitively, the main field of application of PA is the optimization of the Heat Exchanger Networks (HEN) present in complex systems. The concept, based on thermodynamic analysis, does not rely on advanced unit operations for performance improvement, but aims to match the cold and hot process streams with a HEN that minimizes the external energy supply. According to the PA fundamentals, the first step is to draw the heating and cooling curves to evaluate the minimum temperature difference ΔTmin and the related energy target corresponding to reasonable values of the temperature differences.

The interval temperatures are used to compose a Grand Composite Curve (GCC) that gives an overall process overview in the temperature versus heat flow diagram (Fig. 1). The smaller ΔTmin, the more heat can be transferred in the heat exchanger, but this also leads to a larger, and more costly, heat exchanger area. Hence, choosing an optimal ΔTmin is possible only by integrating economic considerations.
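The targeting step described above (shifted temperatures, interval heat balances, heat cascade) is the classic problem table algorithm, and it can be sketched in a few lines. The four-stream data set below is a well-known illustrative example from the pinch literature, not data from any plant discussed here.

```python
def problem_table(streams, dt_min):
    """Minimum-utility targeting by the problem table algorithm.
    streams: list of (T_supply, T_target, CP) in degC and kW/K; a stream is
    hot if T_supply > T_target.  Returns (Q_hot_min, Q_cold_min, pinch_T),
    with the pinch expressed as a shifted temperature."""
    half = dt_min / 2.0
    shifted = []
    for t_s, t_t, cp in streams:
        if t_s > t_t:                        # hot stream: shift down by dTmin/2
            shifted.append((t_s - half, t_t - half, cp, "hot"))
        else:                                # cold stream: shift up by dTmin/2
            shifted.append((t_s + half, t_t + half, cp, "cold"))

    bounds = sorted({t for a, b, _, _ in shifted for t in (a, b)}, reverse=True)
    cascade = [0.0]                          # heat flow assuming zero hot utility
    for t_hi, t_lo in zip(bounds, bounds[1:]):
        cp_hot = sum(cp for a, b, cp, kind in shifted
                     if kind == "hot" and max(a, b) >= t_hi and min(a, b) <= t_lo)
        cp_cold = sum(cp for a, b, cp, kind in shifted
                      if kind == "cold" and max(a, b) >= t_hi and min(a, b) <= t_lo)
        deficit = (cp_cold - cp_hot) * (t_hi - t_lo)   # interval heat balance
        cascade.append(cascade[-1] - deficit)

    q_hot = -min(cascade)                    # smallest hot utility keeping all flows >= 0
    flows = [q_hot + q for q in cascade]
    return q_hot, flows[-1], bounds[flows.index(min(flows))]

# Classic four-stream example, dt_min = 20 degC
streams = [(150, 60, 2.0), (90, 60, 8.0), (20, 125, 2.5), (25, 100, 3.0)]
q_hot, q_cold, pinch = problem_table(streams, 20.0)
print(f"QH,min = {q_hot} kW, QC,min = {q_cold} kW, pinch at {pinch} degC (shifted)")
```

For this data set the cascade gives QH,min = 107.5 kW, QC,min = 40 kW and a pinch at a shifted temperature of 80 °C, matching the published targets for this textbook example.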

The diagram is commonly divided into two sub-problems defined by the pinch points (i.e. the constrained regions in which there is the minimum temperature difference between the streams). This approach has two main corollaries: do not transfer heat across the pinch; do not use external cooling above the pinch or external heating below the pinch[8] (as visible in Fig. 2).

Figure 2 – The separation of HEN and the main corollary of PA

Globally, the application of Pinch analysis in the process industry is necessary for large, complex industrial facilities, where systematic methods are needed to identify the best opportunities to improve energy efficiency. The typical PA project is based on the fundamental stage of data acquisition (primarily heat loads and temperatures and economic parameters) regarding the process under consideration. Then, the analysis can be directed to:

  • Selecting the best option for reducing the inefficiencies from an economic point of view (e.g. heat transfer units in distillation plants)
  • Generating targets for each utility for high energy efficiency and low emissions (for design and retrofit)
  • Debottlenecking and optimizing the integration of utilities in retrofit design
  • Managing material resources, such as water and hydrogen ("water pinch" and "hydrogen pinch"), to minimize the makeup and the discharge (while maximizing reuse).

This last aspect can be characterized as Mass Pinch analysis, developed by Mahmoud M. El-Halwagi and Vasilios Manousiouthakis[9], consisting of a thermodynamic procedure used to identify the bottlenecks that limit the extent of mass exchange between the rich and the lean process streams (in order to improve the design and minimize the cost).

Figure 3 - Mass pinch diagram [El-Halwagi, 1998]


Since the Oil & Gas sector is one of the major energy users and suppliers and is highly integrated from the point of view of heating and cooling power, it is an optimal candidate for PA.

The group of the Politecnico di Milano developed many strategies based on PA for the optimal design of steam generators, boilers and heat recovery steam cycles. Their "HRSC Optimizer" has been applied with interesting results to Fischer-Tropsch (FT) synthesis processes[10] (with high recovery of the unconverted gases) as well as to integrated gasification combined cycles (IGCC-CCS). Joe and Rabiu improved the existing HEN of a petroleum refining section, revealing a 34% energy saving through the definition of the optimal utility usage and of the number and surfaces of the exchangers[11]. Yoon et al. suggested the retrofit of an ethylbenzene plant by PA, with a payback time of less than one year and an opex reduction of more than 5%[12]. The application of PA in the retrofit design of the Tula distillation units was described by Briones in 1999 in the Oil&Gas Journal. A reduction of the fuel consumption by more than 40% (8 M$/year), with a payback period of less than 2 years, is among the main claimed results[13]. An integrated design of the atmospheric and vacuum distillation units exploited opportunities for heat recovery and removed inefficiencies such as the use of stripping steam instead of reboilers, the use of heat sources (for example, vacuum residue and pump-arounds), and the cogeneration in the steam and power plant.

A. Posada and V. Manousiouthakis[14] studied a methane reforming based hydrogen production plant with the purpose of finding the minimum utility cost (hot, cold and electricity). Keshavarzian et al. described the PA of the para-xylene separation unit of the Borzouyeh Petrochemical Company[15]. Rossiter reported a detailed example of PA in a crude distillation unit. After data acquisition and identification of the energy target and the major inefficiencies, he identified the main opportunities for retrofit design: i) to rearrange existing heat exchangers to increase feed preheating and/or steam generation; ii) to add heat transfer area to existing matches between hot and cold streams; iii) to add new exchangers to introduce new matches between the streams[8]. His retrofit design reached the recovery of 45% of the energy target (14 MBtu/h in the crude preheating and 12.2 MBtu/h for steam generation at 120 psig), with a net saving of more than 2.5 M$ and a payback period of about 3 years. Shahani et al.[16] have suggested alternative designs of hydrogen plants seen as a source of steam from waste heat recovery (apart from the primary purpose of producing hydrogen), because of the potential of steam reforming to produce steam more efficiently than a conventional boiler. Further industrial case studies are reported on the IPIECA website[17].

For very large problems such as refining industries, mass and energy integration is necessary to reach the best economic option. In analogy with the heat exchanger network, any synthesis process can be seen as the interconnection of different Mass Exchangers[18] (Fig. 3).

This broader vision derives from the concept of seeing a process as a converter of energy (degradation) and matter (separation). This systemic approach is typical of chemical engineering and process engineering that see any complex system as an integration of unit processes. This representation has been intuitively depicted by T. Gundersen in 2013[19] at the International Process Integration Jubilee Conference.

Examples of water and hydrogen PA in the oil & gas sector can be found[20], for instance for the energy recovery at a Fluid Catalytic Cracking (FCC) unit[21]. Rajesh et al.[22] have presented an integrated approach to obtain possible sets of steady state operating conditions for improved performance of an existing plant, using an adaptation of a genetic algorithm that seeks simultaneous maximization of product hydrogen and export steam flow rates. A hydrogen PA of a petroleum refinery has been presented by M.K. Oduola and T.B. Oguntola, who evaluated that the hydrogen margin between source and sink units was drastically reduced to about 17 kNm3/h (~63% reduction)[23]. Nelson and Liu[24] created an automated pinch spreadsheet for the quick evaluation of the hydrogen excess and of the possible savings in the networks through the evaluation of sources and sinks by Property Cascade Analysis (PCA), to establish the resource targets within a property integration framework. The fundamentals and the mathematical algorithms for wastewater minimization by PA can be found in the work of Wang and Smith[25].
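The hydrogen-pinch targeting used in these studies can be illustrated with a simple cascade calculation: sources and sinks are ordered by impurity level, the impurity-load cascade is built, and the minimum make-up flow is the one that lifts the most negative cumulative load to zero. The network below (flows, purities, and a pure make-up stream) is hypothetical, not taken from any of the cited refineries.

```python
def min_fresh_hydrogen(sources, sinks, fresh_impurity=0.0):
    """Cascade targeting of the minimum fresh (make-up) hydrogen flow.
    sources / sinks: lists of (flow, impurity_mole_fraction); a sink accepts
    any blend whose mixed impurity does not exceed its limit.  The make-up is
    assumed to be the purest stream available.
    Returns (min_fresh_flow, pinch_impurity)."""
    levels = sorted({fresh_impurity, 1.0}
                    | {c for _, c in sources} | {c for _, c in sinks})
    net = {c: 0.0 for c in levels}           # net flow entering at each impurity level
    for flow, c in sources:
        net[c] += flow
    for flow, c in sinks:
        net[c] -= flow

    best_fresh, pinch = 0.0, fresh_impurity
    cum_flow, cum_load = 0.0, 0.0            # cascade evaluated with zero make-up
    for c_prev, c in zip(levels, levels[1:]):
        cum_flow += net[c_prev]
        cum_load += cum_flow * (c - c_prev)  # impurity-load surplus so far
        if cum_load < 0.0:                   # deficit: make-up must lift it to zero
            needed = -cum_load / (c - fresh_impurity)
            if needed > best_fresh:
                best_fresh, pinch = needed, c
    return best_fresh, pinch

# Hypothetical network: two off-gas sources, two consumer sinks, pure make-up
sources = [(200.0, 0.15), (100.0, 0.30)]     # (mol/s, impurity fraction)
sinks = [(180.0, 0.10), (120.0, 0.25)]
fresh, pinch = min_fresh_hydrogen(sources, sinks)
print(f"minimum make-up = {fresh:.1f} mol/s, hydrogen pinch at {pinch:.0%} impurity")
```

For this data the target is 60 mol/s of make-up with the pinch at 15% impurity: below the pinch purity, sink demand can only be met by blending the off-gas sources with fresh hydrogen.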

Nevertheless, it is important to note that, if not bounded properly and conducted by expert evaluators, pinch analysis can lead to risky solutions or, simply, to virtual solutions that are not compatible with the system into which they fall. The design must therefore be examined in depth by external expert auditors (in particular through hazard analysis).

[1] Francesco G. Giacobbe 1986. Introduzione alla Pinch Technology. Le Pleiadi Editrice s.n.c.
[2] Nishida N., Stephanopoulos, G., Westerberg, A.W., 1981, A review of Process Synthesis, AichE J., 27.
[3] 1. Kemp, I.C., 2006. Pinch Analysis and Process Integration: A User Guide on Process Integration for the Efficient Use of Energy (Second Edition). Butterworth-Heinemann (Elsevier).
[4] AIChE Journal Vol 24, Issue 4, July 1978, Pages: 633–642, Bodo Linnhoff and John R. Flower "Synthesis of heat exchanger networks: I. Systematic generation of energy optimal networks"
[5] IChemE User Guide on Process Integration for the Efficient Use of Energy, 1st edition, in 1982
[6] Kemp, I.C. (2006). Pinch Analysis and Process Integration: A User Guide on Process Integration for the Efficient Use of Energy, 2nd edition. Includes spreadsheet software. Butterworth-Heinemann. ISBN 0-7506-8260-4.
[7] Pollution Prevention through Process Integration: Systematic Design Tools Di Mahmoud M. El-Halwagi. Pollution Prevention through process integration. Acedemic Press, San Diego 1997.
[8] A.P. Rossiter. Improve Energy Efficiency via Heat Integration 2010. Heat Transfer AichE.
[9] M.M. El-Halwagi and V. Manousiouthakis, Synthesis of mass exchange networks Aiche J. Volume 35, Issue 8 August 1989 Pages 1233–1244
[10] Martelli et al., 2012; Design criteria and optimization of heat recovery steam cycles for high efficiency, coal-fired, Fisher-Tropsch Plants. Proceedings of ASME Turbo Expo 2012; 2012, Copenhagen, Denmark
[11] John M. Joe, Ademola M. Rabiu. Retrofit of the Heat Recovery System of a Petroleum Refinery Using Pinch Analysis. Journal of Power and Energy Engineering, 2013, 1, 47-52.
[12] Yoon S-G., Lee, J., Park, S., Heat integration analysis for an industrial ethylbenzene plant using pinch analysis. Applied Thermal Engineering 2007; 27:886-93.
[13] Victor Briones, Ana L. Pérez, Luz M. Chávez, Rubén Mancilla, Marisol Garfias, Rodolfo Del Rosal, Nancy Ramírez Pinch analysis used in retrofit design of distillation units, 1999.
[14] A. Posada and V. Manousiouthakis. Heat and Power Integration Opportunities in Methane Reforming based Hydrogen Production with PSA separation
[15] S. Keshavarzian, V. Verda, E. Colombo, P. Razmjoo. Fuel saving due to pinch analysis and heat recovery in a petrochemical company
[16] Shahani G.H, Garodz LJ, Murphy KJ, Baade WF, Sharma P. Hydrogen and utility supply optimization. Hydrocarbon Processing. 1998;77(9):143-148.
[17] Pinch Analysis, IPIECA.
[18] G.W. Garrison, B.L. Cooley, M.M. El Halwagi. Synthesis of Mass-Exchange Networks with Multiple Target Mass-Separating  Agents.
[19] T. Gundersen 2013. What is Process Integration? International Process Integration Jubilee Conference
[20] April M. Nelson and Y.A. Liu. Hydrogen-Pinch Analysis Made Easy: an automated spreadsheet method can quickly help minimize fresh hydrogen consumption while maximizing hydrogen recovery and reuse in petroleum refineries and petrochemical complexes. Chemical Engineering, www.che.com, June 2002.
[21] Natural Resources Canada: Pinch Analysis: For the Efficient Use of Energy, Water & Hydrogen. Her Majesty the Queen in Right of Canada, 2012.
[22] Rajesh JK, Gupta SK, Rangaiah GP, Ray AK. Multi-objective optimization of steam reformer performance using genetic algorithm. Industrial and Engineering Chemistry Research 2000;39:706-717.
Rajesh JK, Gupta SK, Rangaiah GP, Ray AK. Multi-objective optimization of industrial hydrogen plants. Chemical Engineering Science 2001;56:999-1010.
[23] M.K. Oduola, T.B. Oguntola. Hydrogen pinch analysis of a petroleum refinery as an energy management strategy.
[24] Nelson and Liu, Virginia Polytechnic Institute.
[25] Y.P. Wang, R. Smith, 1994. Wastewater Minimization. Chemical Engineering Science 49, pp. 981-1006.

Bioremediation of Hydrocarbon Contaminated Soil Using Selected Organic Wastes

 Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

The large increase in the past century of industrial development, population growth and urbanization favoured the release of hazardous chemicals in the environment and a general global pollution. Several chemicals, including heavy metals and radionuclides, but also organic compounds such as pesticides, dyes, Polycyclic Aromatic Hydrocarbons (PAHs), may persistently accumulate in soils and sediments, thus potentially menacing human health and environment quality, due to their carcinogenic and mutagenic effects, and ability to bioconcentrate throughout the trophic chain[1].

The concern about toxicity risks and environmental pollution associated with chemical contaminants has called for the development and application of remediation techniques. In fact, a large effort has been devoted to finding ways to remove contaminants from ecosystems. In particular, several strategies have been devised to remediate and restore polluted soils, based on physical, chemical and biological methods. These techniques may be applied in situ, i.e. directly in the contaminated soil, thus offering numerous advantages over ex situ technologies, whereby the soil is removed to be treated elsewhere. In situ remediation techniques do not incur soil transportation costs and can be applied to diluted and widely diffused contamination, thus minimizing dangerous intensive environmental manipulation. Conversely, ex situ processes imply the excavation of the polluted soil and its decontamination in a separate processing plant[2]. Table 1 summarizes the main technologies for cleaning up polluted soils and the estimated costs of each treatment.

Depending on the contaminant characteristics and soil properties, different soil remediation technologies can be applied with variable success. However, effective and eco-friendly biological, physical and chemical remediation practices are today preferred over techniques that imply larger biotic and abiotic environmental impacts.

Table 1. Main technologies for cleaning up of polluted soils and the estimated costs of each treatment.
Treatment                       Approximate remediation cost (£/tonne)
Removal to landfill             Up to 100
Cement and Pozzolan based       25-175
Lime based                      25-50
Vitrification                   50-525
Physical processes
  Soil washing                  25-150
  Physico-chemical washing      50-175
  Vapour extraction             75
Chemical processes
  Solvent extraction            50-600
  Chemical dehalogenation       175-450
  In situ flushing              25-80
  Surface amendments            10-25
Thermal treatment
  Thermal desorption            25-225
  Incineration                  50-1200
Biological treatment
  Windrow turning               10-50
  Land farming                  10-90
  Bioventing                    15-75
  Bioslurry                     50-85
  Biopiles                      15-35
  In situ bioremediation        175

2.Bioremediation Methods

Bioremediation, either as a spontaneous or as a managed strategy, involves the application of biological agents to clean-up environmental compartments polluted by hazardous chemicals. Plants, microorganisms and plant-microorganism associations, either naturally occurring or tailor-made for the specific purpose, represent the main bioremediation active factors.


Figure 1- Bioremediation scheme.

2.1 Microorganisms

In contaminated soils, aromatic Anthropogenic Organic Pollutants (AOPs) can be degraded by bacteria or fungi via an aerobic or anaerobic metabolism or both. In aerobic metabolism, molecular oxygen is incorporated into the aromatic ring prior to dehydrogenation and subsequent aromatic ring cleavage. In anaerobic metabolic processes molecular oxygen is absent, and alternative electron acceptors, such as nitrate, ferrous iron, and sulfate, are necessary to oxidize aromatic compounds.

The effective agents in the transformation of organic pollutants are the microbial enzymatic systems which, as powerful catalysts, extensively modify the structure and toxicological properties of contaminants or completely mineralize the organic molecule into innocuous inorganic end products. However, in order to be biodegraded, contaminants must interact with the enzymatic system within the biodegrading organisms. If soluble, they can easily enter cells; if insoluble, they must first be transformed into soluble or more easily cell-available products.

The main sources of these enzymes are fungi, such as wood-degrading basidiomycetes, terricolous basidiomycetes, ectomycorrhizal fungi, soil-borne microfungi, and actinomycetes. Most fungi are robust organisms and may tolerate larger concentrations of pollutants than bacteria. In particular, white-rot fungi appear to be unique and attractive organisms for the bioremediation of polluted sites. A possible alternative to the bioremediation of polluted sites by microbial activity may be the direct application of cell-free enzymes after their isolation from microbial cultures.

Bioremediation of contaminants can be accomplished more rapidly by two methods: bioaugmentation and/or biostimulation[3]. Bioaugmentation, as applied to the remediation of petroleum hydrocarbon contaminated soils, involves introducing into a contaminated system microorganisms that have been exogenously cultured with the aim of degrading specific chains of hydrocarbons. These microbial cultures may be derived from the very same contaminated soil or obtained from a stock of microbes previously proven to degrade hydrocarbons. The biostimulation process, on the other hand, implies the addition to polluted soils of nutrients in the form of organic and/or inorganic fertilizers, in order to stimulate the activity and proliferation of indigenous microbes. These may or may not have been proven to target the polluting hydrocarbons as a primary food source. However, the hydrocarbons are assumed to be degraded more rapidly than in natural attenuation processes, probably because of the increased number of microorganisms induced by the greater amount of nutrients provided to the contaminated soil.

2.2 Plants

Phytoremediation of organic and inorganic contaminants involves either a physical removal of pollutants or their bioconversion (biodegradation or biotransformation) into biologically inactive forms. The conversion of metals into inactive forms can be enhanced by external conditioning of soils: enhancement of soil pH (e.g. through liming), addition of organic matter (e.g. sewage sludge, compost etc.), inorganic anions (e.g. phosphates) and metal oxides and hydroxides (e.g. iron oxides). Concomitantly, plants can play a role here in transforming contaminants in inactive forms by releasing different anionic species in soil and altering soil redox conditions[4].

The uptake of AOPs by plants occurs through two pathways. One is the soil-water-plant cycle, in which pollutants are taken up from the soil solution and then transported up the plant shoots within the xylem transpiration system. The second pathway involves the soil-air-plant cycle, in which AOPs are taken up by the aerial parts of plants, either from soil particles adsorbed on plant leaves or directly as gaseous forms of AOPs after their volatilization from soil. Following plant uptake, AOPs are further translocated, sequestered, and degraded in plant tissues by other processes. The key parameters influencing the translocation of contaminants from soil to plant include the content of contaminants in the soil (or water), their physical-chemical properties, the plant species, the soil type, and the exposure time of the plant[5].

The advantages of phytoremediation over other approaches are due to the inherent preservation of the natural soil structure and to the free sunlight energy involved in the process, which enhances the content of degrading microbial biomass in soil.

2.3 Compost and Biochar

The composting process is the biological decomposition of organic wastes under controlled aerobic conditions. In contrast to the uncontrolled natural decomposition of organic compounds, the temperature in composting waste heaps can increase by self-heating to the ranges typical of mesophilic (25-40 °C) and thermophilic (50-70 °C) microorganisms. The end product of composting is a biologically stable, humus-like product that can be employed in several applications, e.g. as a soil conditioner, fertilizer, biofiltering material, or fuel. The composting process can concomitantly reach different objectives, such as the volume and mass reduction of biomasses, their stabilization and drying, and the elimination of phytotoxic substances and pathogens[6].

Composting is also a method to be employed in the decontamination of polluted soils, because compost is capable of sustaining various microbial populations that are potential degraders of hydrocarbons, such as bacteria (including bacilli, pseudomonads, and mesophilic and thermophilic actinomycetes) and lignin-degrading fungi. Compost can also improve the chemical and physical properties of the soil to be decontaminated, since it affects soil pH, nutrient and moisture content, soil structure, and the microbial biomass population.

Unless coupled with more bioactive compost materials, the possible use of biochar in the remediation of contaminated soil appears limited by its inherent biological recalcitrance that depresses the activity of pollutants microbial degraders[7].

3.Case Study: Bioremediation by selected organic wastes

Inadequate mineral nutrients, especially nitrogen and phosphorus, often limit the growth of hydrocarbon-utilizing bacteria in water and soil. The addition of nitrogen and phosphorus to an oil-polluted soil has been shown to accelerate the biodegradation of the petroleum in soil. Crude oil biodegradation was reported to be 18.7% and 31.2% higher in soil amended with chicken droppings and fertilizer, respectively, than in un-amended control soil after 10 weeks, while degradation of crude oil in soil amended with melon shells as a source of nutrients was 30% higher than in un-amended polluted soil after 28 days[8].

Addition of a carbon source as a nutrient in contaminated soil is known to enhance the rate of pollutant degradation by stimulating the growth of microorganisms responsible for biodegradation of the pollutant.

It has been suggested that the addition of carbon in the form of pyruvate stimulates microbial growth and enhances the rate of Polycyclic Aromatic Hydrocarbon (PAH) degradation. Mushroom compost and spent mushroom compost (SMC) are also applied in treating organo-pollutant contaminated sites. Addition of SMC results in enhanced PAH-degrading efficiency (82%) compared with removal by sorption on immobilized SMC (46%). It was observed that the addition of SMC to the contaminated medium reduced the toxicity and added enzymes, microorganisms, and nutrients for the microorganisms involved in the degradation of PAHs[9].

Therefore, the utilization of organic waste in the bioremediation of soil appears highly promising. It will reduce the amount of organic waste sent to landfill, thus reducing the emission of landfill gases, and also provide a cheap source of organic additive for remediation purposes.

Figure 2 - Percentage biodegradation of petroleum hydrocarbon in soil contaminated with used lubricating oil and amended with organic wastes.

Figure 2 shows the biodegradation of used lubricating oil in soil over a period of 98 days, as reported in Agamuthu et al. 2013[10]. The results show high biodegradation of the used lubricating oil at the end of the 98 days in soil amended with organic wastes compared to the control treatment. Contaminated soil amended with cow dung showed the highest oil biodegradation, 94%, followed by soil amended with sewage sludge at 82%, compared to 66% in the un-amended control soil. Soil contaminated with used lubricating oil and amended with organic wastes thus showed greater oil biodegradability than the un-amended control soil in this study.

The main difference in oil biodegradation between the soil amended with organic wastes and the un-amended soil treatment occurred during days 14-28, when biostimulation resulted in a significant increase of oil biodegradation. The addition of nutrients stimulates the degradative capabilities of the indigenous microorganisms, thus allowing them to break down the organic pollutants at a faster rate.
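Assuming first-order degradation kinetics (a simplification; the study reports only removal percentages), apparent rate constants and half-lives can be back-calculated from the 98-day endpoints quoted above:

```python
import math

def first_order_rate(removal_fraction, days):
    """Apparent first-order rate constant k (1/day), assuming
    C(t) = C0 * exp(-k t) -- a modelling assumption, since the study
    reports only endpoint removal percentages."""
    return -math.log(1.0 - removal_fraction) / days

# 98-day removals reported by Agamuthu et al. (2013)
removals = {"cow dung": 0.94, "sewage sludge": 0.82, "control": 0.66}
for amendment, frac in removals.items():
    k = first_order_rate(frac, 98.0)
    print(f"{amendment:>13}: k = {k:.4f} 1/day, half-life = {math.log(2)/k:.0f} days")
```

Under this assumption, the cow dung amendment cuts the apparent half-life of the oil from about 63 days (control) to about 24 days.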

In conclusion, bioremediation can be a viable and effective response to soil contamination with petroleum hydrocarbons and can be positively enhanced by the use of organic wastes.

[1] K.T. Semple, B.J. Reid, T.R. Fermor, Impact of composting strategies on the treatments of soils contaminated with organic pollutants, Environ. Pollut., 112 269-283 (2001).
[2] T. Iwamoto and M. Nasu, Current bioremediation practice and perspective, J. Biosci. Bioeng. 92, 1-8 (2001).
[3] C.J. Cunningham and J.C. Philp, Comparison of bioaugmentation and biostimulation in ex situ treatment of diesel contaminated soil, Land Contamination & Reclamation, 8, 261-269 (2000).
[4] J. Peng, Y. Song, P. Yuan, X. Cui, G. Qiu, The remediation of heavy metals contaminated sediment, J. Hazard. Mater., 161, 633-640 (2009).
[5] C.T. Chiou, Partition and adsorption of organic contaminants in environmental systems, John Wiley & Sons, New York (2002).
[6] E. Mena, A. Garrido, T. Hernández, C. García, Bioremediation of sewage sludge by composting, Commun. Soil Sci. Plan., 34, 957-971 (2003).
[7] L. Beesley, E. Moreno-Jiménez, J.L. Gomez-Eyles, E. Harris, B. Robinson, T. Sizmur, A review of biochars' potential role in the remediation, revegetation and restoration of contaminated soils, Environ. Pollut., 159, 3269-3282 (2011).
[8] Abioye OP, Abdul Aziz A, Agamuthu P. Stimulated biodegradation of used lubricating oil in soil using organic wastes. Malaysian Journal of Science, 2009; 28(2):127-133.
[9] Lau KL, Tsang YY, Chiu SW. Use of spent mushroom compost to bioremediate PAH-contaminated samples. Chemosphere, 2003; 52(9): 1539–1546.
[10] P. Agamuthu, Y.S. Tan, S.H. Fauziah, Bioremediation of hydrocarbon contaminated soil using selected organic wastes, Procedia Environmental Sciences 18 ( 2013 ) 694 – 702

Life Cycle Assessment (LCA)

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

The Life Cycle Assessment (LCA) allows one to evaluate the interactions that a product or service has with the environment, considering its whole life cycle, which includes the pre-production phases (extraction and production of raw materials), production, distribution, use (including reuse and maintenance), recycling, and final disposal. The objectives of the LCA are thus to evaluate the effects of the interactions between a product and the environment, and therefore the environmental impacts directly or indirectly caused by the use of a given product.

Figure 1 - Example of a product system for LCA

LCA can be conducted by assessing the environmental footprint of a product from raw materials to production (cradle to gate), or it can be extended to the whole product life cycle, including disposal (cradle to grave). If the analysis is performed directly on the categories of environmental impact, the methodology is called the "mid-point approach". A viable and valid alternative is represented by the "end-point approach", also called the "damage-oriented approach".


Figure 2 - LCA structure
  According to ISO 14040[1] and 14044[2], the LCA is achieved through four distinct phases:
  • Goal and Scope.
  • Life Cycle Inventory (LCI).
  • Life Cycle Impact Assessment (LCIA)
  • Interpretation (normalization and weighting)

2.LCA Phases

In the first phase, the goal and scope of the study are formulated and specified in relation to the intended application. The object of study is described in terms of a so-called functional unit. Apart from describing the functional unit, the goal and scope should address the overall approach used to establish the system boundaries. The system boundary determines which unit processes are included in the LCA and must reflect the goal of the study.

The second phase, "Inventory", involves data collection and modeling of the product system, as well as the description and verification of data. This phase encompasses all data related to environmental (e.g., CO2) and technical (e.g., intermediate chemicals) quantities for all relevant unit processes within the study boundaries that compose the product system. The data must be related to the functional unit defined in the goal and scope phase. The result of the inventory is a life cycle inventory (LCI), which provides information about all inputs and outputs in the form of elementary flows between the environment and all the unit processes involved in the study.

The third phase, "Life Cycle Impact Assessment (LCIA)", aims to evaluate the contribution to impact categories such as global warming and acidification. The first step is termed characterization: here, impact potentials are calculated based on the LCI results. The next steps are normalization and weighting, but these are both voluntary according to the ISO standard. Normalization provides a basis for comparing different types of environmental impact categories (all impacts are expressed in the same unit). Weighting implies assigning a weighting factor to each impact category depending on its relative importance.
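The characterization step can be written compactly as a weighted sum of inventory flows per impact category. In the sketch below the inventory is hypothetical; the global warming factors follow the IPCC AR5 100-year values, while the acidification factors are merely illustrative placeholders.

```python
# Hypothetical life cycle inventory for one functional unit (kg emitted)
inventory = {"CO2": 120.0, "CH4": 0.8, "N2O": 0.05, "SO2": 0.3, "NOx": 0.2}

# Characterization factors (kg-equivalent per kg emitted).  The GWP values
# are the IPCC AR5 100-year factors; the acidification factors are
# illustrative, not taken from a specific LCIA method.
factors = {
    "climate change (kg CO2-eq)": {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0},
    "acidification (kg SO2-eq)": {"SO2": 1.0, "NOx": 0.7},
}

def characterize(lci, char_factors):
    """LCIA characterization: score(category) = sum_i CF(category, i) * flow(i)."""
    return {category: sum(cf * lci.get(flow, 0.0) for flow, cf in cf_map.items())
            for category, cf_map in char_factors.items()}

for category, score in characterize(inventory, factors).items():
    print(f"{category}: {score:.2f}")
```

Normalization and weighting, when applied, simply rescale these category scores by reference values and importance factors in the same element-wise fashion.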

Issues such as choice, modelling and evaluation of impact categories can introduce subjectivity into the LCIA phase. Therefore, transparency is critical to the impact assessment to ensure that assumptions are clearly described and reported.

Figure 3 - Stages of an LCA

The LCIA addresses only the environmental issues that are specified in the goal and scope. Therefore, LCIA is not a complete assessment of all environmental issues of the product system under study. LCIA cannot always demonstrate significant differences between impact categories and the related indicator results of alternative product systems. This may be due to

  • limited development of the characterization models, sensitivity analysis and uncertainty analysis for the LCIA phase,
  • limitations of the LCI phase, such as setting the system boundary, that do not encompass all possible unit processes for a product system or do not include all inputs and outputs of every unit process, since there are cut-offs and data gaps,
  • limitations of the LCI phase, such as inadequate LCI data quality which may, for instance, be caused by uncertainties or differences in allocation and aggregation procedures, and
  • limitations in the collection of inventory data appropriate and representative for each impact category.

The last phase, named ‘‘interpretation’’, is an analysis of the major contributions, sensitivity analysis and uncertainty analysis. This stage leads to a conclusion on whether the ambitions of the goal and scope can be met.

The interpretation should reflect the fact that the LCIA results are based on a relative approach, that they indicate potential environmental effects, and that they do not predict actual impacts on category endpoints, the exceeding of thresholds or safety margins or risks. The findings of this interpretation may take the form of conclusions and recommendations to decision-makers, consistent with the goal and scope of the study.

Life cycle interpretation is also intended to provide a readily understandable, complete and consistent presentation of the results of an LCA, in accordance with the goal and scope definition of the study.

The interpretation phase may involve the iterative process of reviewing and revising the scope of the LCA, as well as the nature and quality of the data collected in a way which is consistent with the defined goal.

The findings of the life cycle interpretation should reflect the results of the evaluation element.


3.LCA Methods and Software

The LCA analysis can be performed using software packages (the most important and widely used are SimaPro[3], Boustead[4], GaBi[5]) which implement several LCA methodologies. Among these, the most used methods at mid-point level are:

  • CML 2001[6], which computes 10 impact categories (Abiotic Depletion, Acidification, Eutrophication, Climate Change (GWP100), Ozone Layer Depletion, Human Toxicity, Freshwater Ecotoxicity, Marine Ecotoxicity, Terrestrial Ecotoxicity, Photochemical Oxidation);
  • Cumulative Energy Demand (CED)[7], generally used for the evaluation of primary energy savings, which accounts for 6 impact categories (Non-renewable, fossil; Non-renewable, nuclear; Renewable, biomass; Renewable, wind, solar, geothermal; Renewable, water);
  • Intergovernmental Panel on Climate Change (IPCC)[8], used for the assessment of global warming; it is a typical single-issue methodology.

As for the methods at end-point (or damage) level, one of the most interesting is the Eco-indicator 99[9]. This approach deals with 11 mid-point impact categories (Carcinogenesis, Respiratory Organics, Respiratory Inorganics, Climate Change, Radiation, Ozone Layer, Ecotoxicity, Acidification/Eutrophication, Land Use, Minerals, Fossil Fuels), further aggregated into macro-categories representative of overall damage: Human Health, Ecosystem Quality and Resources.

The impact categories from carcinogens to ozone layer are normalized and grouped in the macro-category (end-point or damage level) ‘‘Human Health’’, which takes into account the overall impact (damage) on human health of the emissions associated with the product analyzed. The categories ecotoxicity, acidification/eutrophication and land use are included in the macro-category ‘‘Ecosystem Quality’’, which accounts for the overall damage to the environment, while minerals and fossil fuels are grouped in the macro-category ‘‘Resources’’, which accounts for the depletion of non-renewable resources.

The impact category indicator results calculated in the characterization step are directly added to form damage categories. Addition without weighting is justified because all impact categories that refer to the same damage type (like damage to Ecosystem Quality) share the same unit (for instance PDF*m2*yr, where PDF is the potentially disappeared fraction of plant species). This procedure can also be interpreted as grouping. The damage categories (and not the impact categories) are then normalized at a European level (damage caused by one European per year), mostly with 1993 as the base year, with some updates for the most important emissions.
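The end-point aggregation described above (addition of same-unit mid-point indicators into damage categories, then normalization at damage level) can be sketched as follows; every numeric value is an illustrative placeholder, not Eco-indicator 99 data:

```python
# Sketch of end-point aggregation: mid-point indicators that already share
# a damage unit are simply added (grouping, no weighting needed), then the
# damage totals are normalized. All values are invented placeholders.

# Mid-point results, already expressed in the damage unit of their group:
# DALY for Human Health, PDF*m2*yr for Ecosystem Quality, MJ surplus for Resources
midpoints = {
    "carcinogens": 1.2e-6, "respiratory_inorganics": 3.4e-6,   # DALY
    "ecotoxicity": 0.8, "acidification_eutrophication": 0.5,   # PDF*m2*yr
    "minerals": 2.0, "fossil_fuels": 15.0,                     # MJ surplus
}

groups = {
    "Human Health":      ["carcinogens", "respiratory_inorganics"],
    "Ecosystem Quality": ["ecotoxicity", "acidification_eutrophication"],
    "Resources":         ["minerals", "fossil_fuels"],
}

# Addition without weighting is valid because units match within each group
damages = {g: sum(midpoints[m] for m in members) for g, members in groups.items()}

# Normalization at damage level (e.g. damage per European per year; assumed refs)
norm = {"Human Health": 1.5e-2, "Ecosystem Quality": 5.1e3, "Resources": 8.4e3}
normalized = {g: damages[g] / norm[g] for g in damages}
```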


4.Case Study: Produced Water Treatment

Due to its complex and polluting composition, the norms regulating the discharge of produced water into the environment have gradually become more limiting and strict. Appropriate produced water treatments cost about 40 billion dollars per year, a cost that clearly weighs on the price of final products. For this reason, it is desirable that the water be reused after treatment; this is especially true in arid regions, where water is a valuable and precious asset. The aim of this case study is to highlight the importance of treating produced water and to understand its environmental relevance. The assessment includes the entire life cycle of the process: the extraction and processing of raw materials, manufacturing, transportation, distribution, use, reuse, recycling and disposal.

The LCA method is applied to the most important produced water treatments, using GaBi 6 as the process simulator. The analysis and the comparison have been made for two cases:

  1. Reinjection + Primary treatments (see figure 4);
  2. Reinjection + All treatments (including secondary and tertiary treatments) (see figure 5);

Figure 4 - Reinjection + Primary Treatments

Figure 5 - Reinjection + All Treatments
Figure 6 - LCA Result Comparison

Primary treatments consist mainly of physical treatments aimed at removing suspended oil, while secondary treatments focus on the removal of dissolved organic compounds (mainly BTEX). The application of tertiary treatments (membranes) is necessary to make the produced water suitable not only for disposal but also for use in civil and industrial fields. In this way it can represent a resource with economic value, rather than an oil-extraction waste.

Figure 6 reports the comparison of the LCA results for the two systems under analysis in terms of three important mid-point impact categories, which account for global warming, ecotoxicity and human health. As can be seen from the figure, the presence of secondary and tertiary treatments strongly reduces the impact on ecotoxicity and human health, while the global warming effect is higher than that of system 1 (primary treatments only), mainly due to the GHG emissions of the secondary and tertiary treatment processes.

[1] ISO 14040:2006, Environmental management—Life cycle assessment—Principles and framework.
[2] ISO 14044:2006, Environmental management—Life cycle assessment—Requirements and guidelines.

LNG R&D for the Liquefaction and Regasification Processes

 Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

Liquefied Natural Gas (LNG) is used for transporting natural gas (NG) to distant markets not supplied by an NG grid connecting the extraction/production point to the users.

Basically, the LNG process is composed of the following steps[1]:

  • Extracted natural gas is liquefied at the production field or at a nearby site, after removing the impurities. Usually, in the liquefaction process the gas is cooled to a temperature of approximately -162°C at ambient pressure.
  • Then, the LNG is loaded onto double-hulled ships which are used for both safety and insulating purposes and transported to the receiving harbor.
  • Upon arrival, the LNG is loaded into well-insulated tanks and then re-gasified in specific plants.
  • At the end, the re-gasified NG is fed to the pipeline distribution system and delivered to the end-users.

However, the high production, transportation and storage costs have limited the spread of LNG technology to specific cases in which there are no cheaper ways to transport the NG.

However, the market and political issues related to NG are increasing the interest in this alternative transportation technology, which has the benefit of enlarging the potential markets for sellers and the potential suppliers for buyers (refer to Figure 1). The growing interest has led to ever greater investments in LNG Research & Development and its applications.



Fig. 1 – Global LNG demand[2]

In the following, some of the technologies and innovations related to the LNG production, the transportation and the regasification fields are reported and assessed.

2.NG Liquefaction processes

A liquefied natural gas plant (LNG plant) is usually divided into four steps[3]:
  1. pretreatments;
  2. acid gas removal;
  3. dehydration;
  4. liquefaction

The pretreatment unit, where the undesired substances are removed, is the same used in the conventional production/distribution process and is composed of separation units and a slug catcher able to separate the gas from the oil and water phases.

Then, the NG is purified from acid gases such as hydrogen sulfide (H2S) and carbon dioxide (CO2) by means of absorption/adsorption processes. Also in this step, conventional technologies are used.

In step 3, an adsorbent is used to remove water from the natural gas from which impure substances have been removed. In this way, ice will not form during the subsequent step.

Then, the NG is ready to be liquefied in the core unit of the process, the liquefaction unit, in which the NG is cooled down and liquefied at –160°C or less. Because of the extremely low operating temperatures needed, the liquefaction process requires an enormous amount of energy, usually supplied by burning a share of the NG feedstock. The R&D efforts are focused mainly on this step, proposing innovations able to reduce the energy consumption and improve the liquefaction process efficiency.
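As a rough illustration of why this step dominates the energy bill, the sketch below estimates the heat to be removed per kg of LNG and the thermodynamic minimum (reversible) work, treating NG as pure methane with rounded textbook property values; real plants consume a multiple of this reversible minimum:

```python
# Back-of-the-envelope liquefaction thermodynamics for ~1 kg of methane.
# Property values are approximations for illustration only.
from math import log

T0     = 298.0   # K, ambient temperature
T_lng  = 111.0   # K, methane boiling point at ~1 atm (~ -162 degC)
cp_gas = 2.2     # kJ/(kg*K), average cp of gaseous methane (approx.)
h_vap  = 511.0   # kJ/kg, latent heat of methane (approx.)

# Heat removed: sensible cooling from ambient to T_lng, plus condensation
q = cp_gas * (T0 - T_lng) + h_vap                 # ~ 922 kJ/kg

# Reversible (minimum) work = exergy change = (h2 - h1) - T0*(s2 - s1)
dh = -q
ds = cp_gas * log(T_lng / T0) - h_vap / T_lng     # kJ/(kg*K)
w_min = dh - T0 * ds                              # ~ 1100 kJ/kg ~ 0.3 kWh/kg

print(f"heat removed ~ {q:.0f} kJ/kg, minimum work ~ {w_min:.0f} kJ/kg")
```

The gap between this reversible work and the actual compressor power of a real cycle is precisely what the C3-MR, AP-X, Cascade and DMR variants discussed below try to narrow.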

The main liquefaction processes and innovations are:
  • C3-MR method, which is the most widely applied. After the acid gas separation, the NG is dried and pre-cooled to -35°C using propane. Then, it passes through the tubes of a shell-and-tube heat exchanger, with a refrigerant fed to the shell side. The final temperature of the NG is between -150°C and -162°C. A process scheme is shown in Figure 2.
Fig. 2 – C3-MR process scheme [4]
  • AP-X method, which is an evolution of C3-MR process to be applied for large liquefaction plants. The process is based on the integration of LNG sub-coolers with nitrogen coolant used according to the C3-MR method, without increasing the size of the main heat exchanger[5].
  • Cascade method, which sequentially uses propane, ethylene and methane as coolants in a cascade configuration (Phillips Petroleum Company[6]).
  • DMR (Double Mixed Refrigerant) method, which uses two kinds of mixed coolants (an ethane-propane mix and a nitrogen-methane-ethane-propane mix), applied by Shell. The process is able to reduce the operating cost by 6-8% compared to the C3-MR configuration[7].

Since all these configurations require large amounts of energy (mainly for the refrigeration compressors), growing R&D efforts are devoted to the process optimization. The main R&D activities are focused on the cryogenic heat exchanger design and optimization (Air Product and Chemicals Inc. technology[8]), on the improvement of refrigerant compressors (SplitMR technology) and on the efficiency of the compressors’ drivers.

3.LNG transportation technologies

The LNG transportation process can be summarized as follows:
  • Firstly, the insulated tanks placed on the LNG ship have to be inerted to avoid the explosion risk;
  • Then, the tanks are cooled-down to be ready to be charged by cryogenic LNG. The cooling-down process is made spraying into the tanks the LNG, which vaporizes cooling down the environment inside the tank.
  • After tanks cooling, the LNG is pumped from the on-site storage tanks into the vessel tanks.

Basically, two vessel technologies are applied:

  1. The Floating Storage Unit (FSU), able only to transport the LNG and pump it to the on-shore storage tanks in the receiving port (Figure 3[9])
  2. The Floating Storage and Regasification Unit (FSRU), in which the regasification plant is assembled and the regasified NG is then fed directly to the grid (Figure 4[10]).


Fig. 3 – FSU vessel for LNG transportation


Fig. 4 – FSRU vessel for LNG transportation and regasification

The R&D in the sector is mainly focused on improving FSRU performance and reducing costs, since the FSRU is an attractive fast-track solution for small markets and emerging economies.

4.Regasification technologies

The regasification facilities boil the LNG and send it into the NG grid. Almost 100 LNG regasification terminals are now operating worldwide and many others are under construction, mainly in Europe and Asia.

The most applied regasification technologies are:
  • Open Rack Vaporizer (ORV) – An ORV is a vaporizer in which LNG flows inside tubes and is heated up by seawater fed through the shell (refer to Figure 5). The LNG flows in from an inlet nozzle near the bottom, passes through an inlet manifold and is recovered in an outlet manifold placed in the upper zone. To avoid ice formation in the lower part of the heat-transfer tubes, innovative tube structures have been proposed, such as the Kobe Steel SuperORV, whose duplex-pipe structure suppresses icing on the outer surface, thus significantly improving the vaporizing performance.
Fig. 5 – Open Rack Vaporizer layout[11].
  • Fluid-type Vaporizer (FV) – a FV is a vaporizer in which the seawater (the heat source) vaporizes the LNG via a heating medium such as propane. The technology was developed by Osaka Gas and is called TRI-EX. The configuration of a FV combines three shell-and-tube heat exchangers: an intermediate fluid vaporizer, an LNG vaporizer and an NG trim heater (Figure 6).

Fig. 6 - Fluid-type Vaporizer schematic layout[12].
  • Submerged combustion vaporizers (SCV) – An SCV has a structure in which a submerged burner burns a fuel-gas, generating the heat needed to vaporize the LNG flow. It comprises a tank, the burner, a bundle of heat-transfer tubes, combustion-air fan and fuel-supply control device.


Fig. 7 - Submerged combustion vaporizers configuration[13].
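To give a sense of the duties these vaporizers handle, the sketch below sizes a hypothetical ORV-type terminal: heat load to regasify an assumed LNG stream, and the seawater flow needed for an assumed allowable seawater temperature drop. All numbers are illustrative, not vendor data:

```python
# Rough sizing of a seawater-heated LNG vaporizer (ORV-like).
# Stream size, property values and temperature limits are assumptions.

m_lng  = 100.0   # kg/s of LNG to regasify (assumed send-out rate)
h_vap  = 511.0   # kJ/kg, latent heat of methane (approx.)
cp_gas = 2.2     # kJ/(kg*K), gaseous methane (approx.)
dT_gas = 160.0   # K, superheat from ~ -162 degC up to near-ambient send-out

duty = m_lng * (h_vap + cp_gas * dT_gas)   # kW of heat to supply

cp_sw = 4.0      # kJ/(kg*K), seawater (approx.)
dT_sw = 5.0      # K, allowed seawater cooling (environmental limit, assumed)
m_sw  = duty / (cp_sw * dT_sw)             # kg/s of seawater required

print(f"duty ~ {duty/1e3:.1f} MW, seawater ~ {m_sw:.0f} kg/s")
```

The very large seawater flows implied by even modest send-out rates explain why icing control on the tube surfaces (as in the SuperORV) matters so much for ORV performance.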
[8] J. Bukowksi et al., “Innovations in Natural Gas liquefaction technology for Future LNG plants and floating LNG facilities”, International Gas Union Research Conference 2011.
[11] R. Agarwal, “LNG Regasification – Technology evaluation and cold Energy utilization”, Queensland University of Technology, Australia.

EKRT – Electro-Kinetic Remediation Technology for Soil Contaminated by Heavy Metals

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1.Theme description

The effects on human health and the impact on the environment of exposure to heavy metals such as lead, cadmium, mercury and arsenic have been extensively studied by international bodies such as the WHO, which clearly attest a significant negative impact even at low metal concentrations. Although the adverse health effects have been known for a long time, exposure to heavy metals continues to increase due to their extensive use in industry [1] (refer to Figure 1).

Fig. 1 - Global production and consumption of heavy metals during the period 1850–1990 [2]

Specifically, soil contamination by heavy metals is particularly dangerous for humans and for ecosystems, since most metals do not undergo microbial or chemical degradation, so their concentration in soils persists and accumulates for a long time. The main associated risks are listed as follows [3]:

  • direct ingestion or contact with contaminated soil;
  • food chain (soil-plant-human or soil-plant-animal-human);
  • drinking of contaminated ground water;
  • reduction in food quality (safety and marketability) via phytotoxicity;
  • reduction in land usability for agricultural production causing food insecurity.

The soil contamination is an increasing issue for the expansion of industrial areas, disposal of high metal wastes, leaded gasoline and paints, land application of fertilizers, sewage sludge, pesticides, wastewater irrigation, coal combustion residues, spillage of petrochemicals [4].

Some technologies have been developed worldwide for the remediation of contaminated soil. The most widely applied are:

  • Immobilization – organic and inorganic amendments are applied to alter the original soil metals to more geochemically stable phases via sorption, precipitation and complexation processes [5]. Among the immobilization technologies, the most used are Solidification/Stabilization and Vitrification.
  • Soil Washing – it is essentially a volume reduction/waste minimization treatment process. There are two soil washing techniques: physical separation, by which the soil particles which host the majority of the contamination are physically separated from the bulk soil fractions; chemical extraction, by which contaminants are removed from the soil by aqueous chemicals and recovered from solution on a solid substrate [6].
  • Phytoremediation – it uses vegetation, the associated microbiota and agronomic techniques to remove contaminants or render them harmless [7]. The most used technique is phytoextraction, i.e. the process where plant roots take up metal contaminants from the soil into the above-soil tissues.

But the most interesting technology in terms of cost, efficiency and ease of management is Electro-Kinetic Remediation (EKRT): an electric field is generated by two electrodes inserted into the ground and encapsulated in extraction wells, and the electrically charged metal ions are transported, collected and removed from the soil (a conceptual scheme is reported in Figure 2 [8]).
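The electromigration mechanism behind EKRT can be put in numbers with a back-of-the-envelope estimate: an ion drifts at v = u · E, where u is its effective mobility and E the applied field. The values below are generic textbook magnitudes, not ENI's operating parameters:

```python
# Order-of-magnitude estimate of electromigration in electrokinetic
# remediation. Mobility and field strength are illustrative assumptions.

u_ion = 5e-8    # m^2/(V*s), effective mobility of a metal cation in soil (approx.)
E     = 100.0   # V/m, i.e. ~1 V/cm, a commonly cited electrokinetic field strength

v       = u_ion * E          # m/s, drift velocity toward the collecting electrode
per_day = v * 3600 * 24      # m/day

print(f"migration velocity ~ {per_day:.2f} m/day")
```

Fractions of a meter per day is consistent with remediation campaigns that run for weeks to months, and shows why reducing the applied voltage and treatment time (as in ENI's optimized configuration) translates directly into energy savings.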

Electro-Kinetic Remediation technology has been known and applied for about 20 years, but ENI, in partnership with the University of Ferrara, has developed an optimized EKRT configuration for heavy metal recovery from contaminated soil, described in the following paragraph.


Fig. 2 – EKRT functioning scheme

2. ENI’s Electro-Kinetic Remediation Technology

ENI developed an optimized EKRT able to reduce the technology costs and to improve the application easiness, mainly for large-scale use. ENI’s EKRT can be applied to remove from the contaminated soil a wide variety of metals, as Zn, Pb, As, Cd, Co, Fe, Cr, Mn, Cu, Sn.

The main innovations introduced concern:

  • The reduction of the number of electrodes, allowing simpler management and a saving of approximately 30% on the final cost.
  • Commercial electrodes are installed, fabricated with a new production method able to reduce their costs (50% lower than electrodes used a decade ago).
  • Optimization of the electrolytic solution. The solution used by the conventional technology is aggressive and provokes a soil contamination, while the simplified electrolytic solution developed and optimized is able to mobilize the metals without a further pollution of the ground.
  • Easiness of the management and control system. Thanks to the optimized electrodes configuration and to the simplified electrolytic solution, the operative control system is much easier and more robust than the conventional ones.
  • The better performance of the ENI’s architecture allows a reduction of the voltages applied for the electric field generation and of the remediation time, leading to an energy saving and a reduction of operative costs.
Fig. 3 – EKRT configuration developed by ENI and University of Ferrara [9].
  The major benefits can be summarized as follows:
  • In-situ treatment;
  • High efficiency in terms of pollutant removal;
  • It does not require heavy interventions of soil handling (with the exception of the electrode installation), no excavations and/or transportation of polluted material;
  • High level of environmental sustainability and social acceptability;
  • Improved performance compared to similar solutions on the market;
  • Lower investment costs and lower operating costs (savings up to 50%);
  • Simple management;
  • All the installed equipment can be re-used;
  • Wide range of applicability, since the technology can be used to remove various metals such as Zn, Pb, As, Cd, Co, Fe, Cr, Mn, Cu, Sn by simply varying the applied voltage, application time and electrolytic solution.

3.Application and state-of-the-art

ENI has performed EKRT experimental tests on site using real soils. Both single-metal (Hg) and more complex multi-metal decontamination applications have been assessed, with very promising results in terms of recovery efficiency and operational easiness.

In the following, some images taken from the ENI website show the electrode installation and the experimental phases.

Fig. 4 – EKRT electrode installed on-site
Fig. 5 – Electrodes distancing
Fig. 6 – Experimental tests
  ENI patented the EKRT solution (patent application n° MI2012A001889), and the patent approval is in progress.
[2] Nriagu JO. History of global metal pollution. Science 1996; 272: 223–4
[4] S. Khan, Q. Cao, Y. M. Zheng, Y. Z. Huang, and Y. G. Zhu, “Health  risks of heavy metals in contaminated soils and food crops irrigated with wastewater in Beijing, China,” Environmental Pollution, vol. 152, no. 3, pp. 686–692, 2008.
[5] Y. Hashimoto, H. Matsufuru, M. Takaoka, H. Tanida, and T. Sato, “Impacts of chemical amendment and plant growth on lead speciation and enzyme activities in a shooting range soil: an X-ray absorption fine structure investigation,” Journal of Environmental Quality, vol. 38, no. 4, pp. 1420–1428, 2009.
[6] G. Dermont, M. Bergeron, G. Mercier, and M. Richer-Laflèche, “Soil washing for metal removal: a review of physical/chemical technologies and field applications,” Journal of Hazardous Materials, vol. 152, no. 1, pp. 1–31, 2008.
[7] S. D. Cunningham and D. W. Ow, “Promises and prospects of phytoremediation,” Plant Physiology, vol. 110, no. 3, pp. 715–719, 1996.

Dimethyl Ether (DME) Production

Author: Marcello De Falco , Associate Professor, University UCBM – Rome (Italy) 

1.Theme description

DME (Dimethyl Ether) is an organic compound mainly used as aerosol propellant and as a reagent for the production of widely applied compounds as the dimethyl sulfate (a methylating agent) and the acetic acid[1].

Recently, companies such as Topsoe, Mitsubishi Co. and Total have been focusing their efforts on promoting DME as a new and sustainable synthetic fuel that can substitute liquefied petroleum gas (LPG) or be blended into fuel mixtures thanks to its excellent combustion properties (cetane number = 55-60). DME can potentially be fed to diesel engines with only slight modifications, and its combustion prevents soot formation[2],[3].

The conversion of DME to hydrocarbons is also a relevant emerging market[4]. The processes usually known under the general terms “Methanol-to-Hydrocarbons” (MTH), “Methanol-to-Olefins” (MTO), “Methanol-to-Propylene” (MTP), “Methanol-to-Gasoline” (MTG) and “Methanol-to-Aromatics” (MTA) are more effective if the starting reagent is DME instead of methanol.

For all these reasons, the DME market is projected to reach a value of 9.7 billion USD by 2020, with a yearly growth of 19.65% between 2015 and 2020[5].

DME is usually produced directly from syngas (CO/H2 mixtures, possibly with a small amount of CO2, typically below 3%) or by dehydration of methanol, which is in turn produced from syngas. Syngas can be generated from fossil fuels (coal, methane) or from renewable sources such as biomass or renewable electricity. Moreover, there is a growing interest in direct DME production from CO2-rich mixtures.

In the following, an overview of the DME production processes applied worldwide is reported and, then, the major production plants currently in operation are described.

2.Production Processes

2.1 Direct and Indirect production process

In industrial applications, DME is produced from syngas by means of two different configurations[6]:

  • one-step process;
  • two-step process.

In the one-step (direct) process, DME is produced directly from the syngas in a single reactor, where a bifunctional catalyst supports both the methanol formation and the methanol dehydration according to the following reaction scheme[7]:

Methanol formation:      CO + 2H2 ↔ CH3OH                 ΔH° = −90.4 kJ/mol
Water-gas shift:         CO + H2O ↔ CO2 + H2              ΔH° = −41.0 kJ/mol
Methanol dehydration:    2CH3OH ↔ CH3OCH3 + H2O           ΔH° = −23.0 kJ/mol
Overall reaction:        3CO + 3H2 ↔ CH3OCH3 + CO2        ΔH° = −258.3 kJ/mol

The syngas is produced by natural gas steam reforming or by gasification of coal/petroleum residues; after the DME synthesis reactor, a purification unit able to separate the DME from water and methanol in a double distillation stage is needed. The following figure shows a diagram of the one-step process.


In the two-step (indirect) process, the methanol formation from syngas and the DME production from methanol are carried out in two separate reactors, in which the specific catalysts (copper-based for the first, silica-alumina for the second) are packed. The figure illustrates the block diagram of this architecture.


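The stoichiometry of the two routes can be compared directly. The sketch below derives the required H2/CO feed ratio and the fraction of feed carbon ending up in DME for the one-step overall reaction (3CO + 3H2 → DME + CO2) and for the idealized methanol route (2CO + 4H2 → 2CH3OH → DME + H2O), assuming full conversion and no recycle:

```python
# Stoichiometric comparison of the direct and indirect DME routes.
# Only feed ratios and carbon split are checked; conversion, recycle
# and purge streams are ignored (idealized sketch).

def route_metrics(co, h2, dme):
    """Feed H2/CO ratio and fraction of feed carbon ending up in DME
    (DME, CH3OCH3, carries two carbon atoms)."""
    return h2 / co, (2 * dme) / co

# One-step: 3CO + 3H2 -> 1 DME + 1 CO2
ratio_1, ceff_1 = route_metrics(co=3, h2=3, dme=1)
# Two-step via methanol: 2CO + 4H2 -> 2CH3OH -> 1 DME + 1 H2O
ratio_2, ceff_2 = route_metrics(co=2, h2=4, dme=1)

print(f"one-step: H2/CO = {ratio_1:.0f}, carbon to DME = {ceff_1:.2f}")
print(f"two-step: H2/CO = {ratio_2:.0f}, carbon to DME = {ceff_2:.2f}")
```

The trade-off is visible at a glance: the one-step route tolerates a cheaper H2/CO = 1 syngas but rejects a third of the feed carbon as CO2, while the methanol route keeps all the carbon in DME at the price of a hydrogen-richer feed.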
2.2 DME production from renewable energies

The reactants of the DME synthesis process can be produced from renewable energy sources such as biomass, solar and wind. In this way, DME acts as a liquid energy vector, able to store renewable energy in an easily dispensable, easily applicable and high-energy-density fuel.

Starting from biomass such as energy crops, agro-residues or forest residues, a gasification process can be applied to generate a syngas stream to be fed to the one-step or two-step DME synthesis process[8]. On the other hand, if the starting biomass is organic waste, manure or sewage, an anaerobic digestion + pyrolysis system can be applied to generate the CO and H2 stream[9].


The hydrogen stream in the syngas mixture can be generated by an electrolyzer supplied with electricity from renewable power plants such as photovoltaic and wind farms, and then mixed with CO/CO2. In this way, the renewable energy is “stored” in the DME, which, being a liquid fuel, can be easily distributed, stored and used, unlike hydrogen itself, which still has a series of unsolved distribution and storage issues. The following scheme shows a conceptual layout of DME production from solar/biomass energy.


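A rough sizing of this power-to-DME chain follows from the stoichiometry of CO2 hydrogenation (2CO2 + 6H2 → CH3OCH3 + 3H2O). In the sketch below, the electrolyzer consumption figure is an assumed round number for illustration, not a measured value:

```python
# How much electrolytic H2 (and roughly how much electricity) does 1 kg
# of DME require, if made by CO2 hydrogenation? Stoichiometry: 6 mol H2
# per mol DME. The 50 kWh/kg electrolyzer figure is an assumption.

M_H2, M_DME = 2.016, 46.07          # molar masses, g/mol

h2_per_kg_dme  = 6 * M_H2 / M_DME   # kg H2 per kg DME, ~0.26
kwh_per_kg_h2  = 50.0               # electrolysis electricity demand (assumed)
kwh_per_kg_dme = h2_per_kg_dme * kwh_per_kg_h2

print(f"{h2_per_kg_dme:.2f} kg H2 and ~{kwh_per_kg_dme:.0f} kWh per kg DME")
```

Roughly a quarter of a kilogram of hydrogen per kilogram of DME gives a first feel for the electrolyzer and renewable-power capacity such a plant would need.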
2.3 DME production as a CO2 valorization process

Instead of syngas, a CO2-rich feedstock can be supplied to the DME production process, thus converting the CO2 into a high-added-value product. In this process, the CO2, which is the main greenhouse gas (GHG), is not emitted but is converted into a fuel, which can then be burned releasing the carbon dioxide again[10],[11],[12].

Such a configuration is less developed than the conventional syngas-fuelled process, but many research efforts are devoted to improve its performance since it would allow both the production of DME and the reduction of GHG emissions, thus reducing the carbon footprint of DME synthesis.

CO2 presence in the reactor environment leads to two main issues:

  • CO2-rich feedstock influences the active state of the catalyst for methanol synthesis, reducing the rate of formation of methanol[13];
  • CO2 promotes the reverse Water Gas shift reaction, thus producing H2O and inhibiting the methanol dehydration.

The research is focused mainly on the development of new catalysts tailored for the conversion of CO2-rich mixtures, and of selective membranes able to remove water from the reaction environment, promoting the methanol dehydration reaction and the DME production[14],[15].

3.Operative plants and new frontiers

The one-step and two-step DME production processes are relatively well established, with a number of companies proposing the one-step (Topsoe, JFE Ho., Korea Gas Co., Air Products, NKK) or two-step (Toyo, MGC, Lurgi, Uhde) architecture[16].

Among the many applications for DME industrial production, the most interesting are listed below:

  • TOYO has developed an indirect DME production catalyst and technology, fabricating a DME synthesis plant that can be installed within a methanol production plant. The high-performance MRF-Z® reactor[17], which features multi-stage indirect cooling and a radial flow in the methanol synthesis unit, has a capacity of up to 6,000 ton/day in a single train.
  • The MegaDME process is a combination of Lurgi MegaMethanol (capacity > 5000 tons/d)[18] and a Dehydration Plant.
  • China is the world leader of DME production and use. Currently, there are various DME to Olefins and DME to Propylene facilities in China, while many other projects are advancing toward completion. Fourteen to fifteen facilities are expected to be operational by 2016. Most of them are based on the double-function catalyst developed by the Dalian Institute of Chemical Physics (DICP) for the one-step process[19].
  • Methanol-to-Gasoline (MTG) is also an emerging demand segment. Today, six plants use the ExxonMobil’s MTG two-steps technology, with DME as intermediate[20]. In the figure below, the New Zealand SynFuel MTG plant is shown.
  • In Piteå (Sweden), a bio-DME demonstration plant is located. It started the operation in 2010 and it is based on the black liquor (a high-energy residual product of chemical paper and pulp manufacture) gasification process, able to produce a high-quality syngas which then is fed to a DME synthesis unit. The DME produced is, therefore, derived from a renewable energy source (refer to the following figure[21]).


  • Fuel DME Production Co, a company of Mitsubishi Gas Chemical, has fabricated a DME production plant in Niigata Factory (Japan), with a capacity of 240 tons/day and which is fed by a methanol stream transported by pipelines (refer to figure [22]).


The new research studies on the DME production process are mainly based on:
  • the testing and validation of more efficient catalyst for one-step process[23];
  • new reactor configurations as slurry reactors[24] and membrane reactors[25];
  • efficient distillation processes as dividing-wall column (DWC) technology and reactive distillation (RD) for DME purification[26].

[2] T.H. Fleisch, A. Basu, R.A. Sills, Introduction and advancement of a new clean global fuel: The status of DME developments in China and beyond. J. Natural Gas Science and Eng. 9 (2012) 94-107.
[3] S.H. Park, C.S. Lee, Applicability of dimethyl ether (DME) in a compression ignition engine as an alternative fuel. Energy Conv. and Management 86 (2014) 848-863.
[4] P. Tian, Y. Wei, M. Ye, Z. Liu, Methanol to Olefins (MTO): From Fundamentals to Commercialization,. ACS Catal. 5 (2015) 1922-1938.
[6] M. Migliori, A. Aloise, E. Catizzone, G.Giordano, Kinetic Analysis of Methanol to Dimethyl Ether Reaction over H-MFI Catalyst. Ind. Eng. Chem. Res. 53 (2014) 14885-14891
[7] E. Peral, M. Martín, Optimal Production of Dimethyl Ether from Switchgrass-Based Syngas via Direct Synthesis. Ind. Eng. Chem. Res. 54 (2015) 7465-7475.
[10] C. Ampelli, S. Perathoner, G. Centi, CO2 utilization: an enabling element to move to a resource-and energy-efficient chemical and fuel production. Phil. Trans. Royal Soc. London A: Math., Phys. and Eng. Sciences 373 (2015) 20140177.
[11] S. Perathoner, G. Centi, CO2 recycling: a key strategy to introduce green energy in the chemical production chain. ChemSusChem 7 (2014) 1274-1282.
[12] F. Pontzen, W. Liebner, V. Gronemann, M. Rothaemel, B. Ahlers, CO2-based methanol and DME - Efficient technologies for industrial scale production. Catal. Today 171 (2011) 242-250.
[13] G. Centi, S. Perathoner, Advances in Catalysts and Processes for Methanol Synthesis from CO2, In: CO2: A valuable source of carbon. M. De Falco, G. Iaquaniello, G. Centi (Ed.s), Springer-Verlag London 2013, Ch. 9, p. 147-169.
[14] N. Diban, A.M. Urtiaga, I. Ortiz, J. Ereña, J. Bilbao, A.T. Aguayo, Influence of the membrane properties on the catalytic production of dimethyl ether with in situ water removal for the successful capture of CO2. Chem. Eng. J. 234 (2013) 140-148.
[15] I. Iliuta, F. Larachi, P. Fongarland, Dimethyl Ether Synthesis with in situ H2O Removal in Fixed-Bed Membrane Reactor: Model and Simulations. Ind Eng. Chem. Res. 49 (2010) 6870-6877.
[16] Z. Azizi, M. Rezaeimanesh, T. Tohidian, M.R. Rahimpour, Dimethyl ether: A review of technologies and production challenges. Chem. Eng. and Proc. 82 (2014) 150-172.
[25] F. Samimi, M. Bayat, D. Karimipourfard, M.R. Rahimpour, P. Keshavarz, A novel axial-flow spherical packed-bed membrane reactor for dimethyl ether synthesis: Simulation and optimization. Journal of Natural Gas Science and Engineering 13 (2013) 42-51.

Enriched Methane Production Technologies

Author: Marcello De Falco – Associate Professor, University UCBM – Rome (Italy)

1. Theme description

Enriched Methane (EM) is a blend of hydrogen and methane which, if the H2 content is lower than 30% vol., can be fed into conventional natural gas internal combustion engines with a series of benefits in terms of [1][2][3][4]:

  • improvement of engine energy efficiency;
  • reduction of CO2, CO, unburned hydrocarbons emissions.

EM can be distributed in the low- and medium-pressure natural gas grid (if the hydrogen content is lower than 20% vol. [5]) and stored using conventional methane storage systems, so its application is competitive, relying on available, low-cost infrastructure. Moreover, since hydrogen has the highest mass lower heating value (kJ/kg) of any fuel, the blend's mass-basis heating value is greater than that of methane itself, thus enriching the energy content.

Basically, if the H2 is produced by exploiting a renewable energy source (solar, wind, biomass), EM is a sort of hybrid energy vector (fossil + renewable) with an immediate and competitive application potential and a reduced environmental impact, thanks to the strong reduction of CO2 emissions (up to 11% wt. if a blend with 30% vol. H2 is burned).
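The mass-basis heating-value claim above can be checked with a short calculation. The LHV values used below (120 MJ/kg for H2, 50 MJ/kg for CH4) are typical handbook figures, assumed here for illustration:

```python
# Mass-basis lower heating value (LHV) of a 30% vol. H2 / 70% vol. CH4 blend.
# Assumed handbook values: LHV(H2) ~ 120 MJ/kg, LHV(CH4) ~ 50 MJ/kg.
M_H2, M_CH4 = 2.016, 16.043      # molar masses, g/mol
LHV_H2, LHV_CH4 = 120.0, 50.0    # MJ/kg

def blend_lhv_mass(y_h2):
    """Mass-basis LHV of an H2/CH4 blend, given the H2 mole (volume) fraction."""
    m_h2 = y_h2 * M_H2
    m_ch4 = (1.0 - y_h2) * M_CH4
    w_h2 = m_h2 / (m_h2 + m_ch4)           # H2 mass fraction
    return w_h2 * LHV_H2 + (1.0 - w_h2) * LHV_CH4

print(f"{blend_lhv_mass(0.30):.1f} MJ/kg")  # ~53.6 MJ/kg, above pure CH4 (~50)
```

Note that the gain appears only on a mass basis: because H2 is so light, the 30% vol. blend is only about 5% H2 by mass.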

In the present article, the main routes to produce EM blends are investigated both from fossil fuel and from renewable energies. Then, some applications implemented worldwide are presented.

2. Production Processes

2.1 Enriched Methane production from Fossil Fuels

Natural gas steam reforming is the most widely used process for the massive production of hydrogen. The process is based on the following reactions:

CH4 + H2O ⇌ CO + 3H2   (ΔH° = +206 kJ/mol)
CO + H2O ⇌ CO2 + H2   (ΔH° = −41 kJ/mol)

and is strongly endothermic overall, thus requiring high temperatures to achieve high methane conversion (90% at 850-950°C). In the conventional process, the reactions occur in tubular catalytic reactors placed inside a furnace, where a share of the natural gas (approx. 30%) is burned to supply the reaction heat duty. However, if an EM stream is to be produced, a much lower methane conversion (< 20%) and, consequently, lower operating temperatures (450-500°C) are required to meet the hydrogen content specifications. The main consequence is that this lower thermal level can be reached by concentrating solar radiation with well-known technologies such as the Concentrating Solar Power (CSP) system developed by ENEA, able to heat a molten salt stream up to 550°C, a thermal level suitable for the process requirements [6]. In this way, hydrogen is produced by exploiting a renewable source, improving the environmental footprint. The following figure shows a conceptual block scheme of the technology: after the low-temperature reforming, a water gas shift reactor converts CO into H2 and CO2; the unreacted steam is then removed by condensation and the CO2 by amine-based absorption, while the EM stream is sent to the application.
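The "< 20%" conversion figure can be verified from stoichiometry. Assuming the overall reaction CH4 + 2H2O → CO2 + 4H2 (reforming plus complete shift) and full removal of H2O and CO2, the H2 fraction in the EM product is 4x/(1 + 3x) for a CH4 conversion x, which can be inverted for a target blend:

```python
# Estimate the CH4 conversion needed to reach a target H2 volume fraction in the
# EM product, assuming overall stoichiometry CH4 + 2 H2O -> CO2 + 4 H2
# (reforming + complete water gas shift) and full removal of H2O and CO2.
def conversion_for_h2_fraction(y_h2):
    # Product per mole of CH4 fed: (1 - x) CH4 + 4x H2  =>  y_h2 = 4x / (1 + 3x)
    return y_h2 / (4.0 - 3.0 * y_h2)

x = conversion_for_h2_fraction(0.30)
print(f"CH4 conversion: {x:.1%}")  # ~9.7%, consistent with the < 20% quoted above
```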


A variation of the process is Partial Oxidation Methane Reforming, where the heat duty is supplied by the combustion of a share of the input methane directly inside the adiabatic reactor. In this case, the energy needed to produce the hydrogen comes from a fossil source.

Another process is coal gasification, which produces syngas (a mixture of methane, carbon monoxide, hydrogen, carbon dioxide and water vapor) from coal and water, air and/or oxygen. After the gasification reactor, a proper purification system allows an EM stream with the desired H2 content to be obtained.

2.2 Enriched Methane production from Renewable Electricity

Hydrogen can be produced from electricity by means of electrolyzers [7], which dissociate the water molecule into hydrogen and oxygen. The electricity can be generated by renewable power plants such as solar photovoltaic, wind farms, hydroelectric plants, etc., so that the hydrogen produced is completely CO2-free. The hydrogen is then mixed with a methane stream to obtain the EM blend, which can be distributed through the natural gas grid. The following figure shows the renewable EM plant configuration.

With this architecture, it is possible to convert surplus renewable electricity into a high-added-value product such as EM, mitigating the intermittent nature of renewable energy and avoiding overloading of the electricity network.
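As a rough order-of-magnitude sketch of this architecture, the electricity surplus needed to blend a given methane flow can be estimated. The specific consumption of 50 kWh per kg of H2 is an assumed, typical electrolyzer figure, not from the text:

```python
# Rough sizing sketch: electricity surplus needed to blend a methane stream up
# to a target H2 volume fraction. The specific consumption of ~50 kWh/kg H2 is
# an assumed, typical electrolyzer figure.
RHO_H2 = 0.0899          # kg/Nm3 at normal conditions
E_SPEC = 50.0            # kWh per kg of H2 (assumption)

def surplus_kwh_needed(q_ch4_nm3h, y_h2):
    """Electricity (kWh/h) to blend q_ch4 (Nm3/h) of CH4 to y_h2 H2 by volume."""
    q_h2 = q_ch4_nm3h * y_h2 / (1.0 - y_h2)   # Nm3/h of H2 required
    return q_h2 * RHO_H2 * E_SPEC

# Example: 1000 Nm3/h of CH4 blended to the 20% vol. grid limit quoted above.
print(f"{surplus_kwh_needed(1000.0, 0.20):.0f} kWh/h")
```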


2.3 Enriched Methane production from Biologic Processes

Biological hydrogen production by photosynthetic bacteria, algae or fermentative microorganisms appears to be a promising alternative for producing EM.

In the anaerobic digestion process, different microorganisms are involved in producing methane from complex biomass (such as food waste, the organic fraction of municipal solid waste, agro-industrial waste, algae, etc.) through four steps: hydrolysis, acidogenesis, acetogenesis and methanogenesis [8].

To produce EM, a two-phase process has to be implemented, in which an appropriate separation of the acidogenic and methanogenic phases allows the complex organic material to be converted into hydrogen, carbon dioxide and volatile fatty acids during the first stage, and these biodegradable compounds into methane and carbon dioxide during the methanogenic stage.

Moreover, processes able to convert biomass (solid or liquid) into syngas (CO + H2), such as gasification, can be applied to produce EM. The gasifier can be coupled with a water gas shift reactor where the following reaction is promoted:

CO + H2O ⇌ CO2 + H2   (ΔH° = −41 kJ/mol)

producing hydrogen from carbon monoxide. The hydrogen is then purified from CO2 and traces of CO, mixed with methane and used.


Some EM pilot applications have been implemented worldwide. Among them, the following are worth citing:

  • Mhybus Project [9]: an EM-fuelled bus was developed and circulated on urban roads in the city of Ravenna. The bus ran for more than 45,000 km on a normal service line, averaging 212 km per day with more than 10,000 passengers on board, demonstrating the feasibility of the EM application. A yearly saving of 419 € for each bus using EM instead of natural gas has been quantified.
  • The ALT-HY-TUDE Project tested two buses fuelled by Hythane® (a mixture of 20% vol. H2 and 80% vol. CH4) in the city of Dunkerque [10]. The project was led by the Research Division of Gaz de France. The hydrogen is produced by an electrolyzer in a dedicated filling station and then mixed with natural gas to fuel buses equipped with a conventional natural gas storage system.
  • METISOL was a research project, funded by a consortium led by Centro Ricerche FIAT (CRF), focused on the development of an EM production plant coupled with a concentrating solar power (CSP) plant. Hydrogen is produced in a low-temperature steam reformer (500°C), and a pilot plant able to generate 1 Nm3/h of EM (30% vol. H2) has been installed and tested in ENEA laboratories [11].
  • Malmö Hydrogen and CNG/Hydrogen filling station: a hydrogen production plant based on an electrolyzer, connected to an EM filling station, has been installed in Malmö (Sweden) [12]. The filling station is owned and operated by E.ON Gas Sverige AB. The EM produced (8% vol. H2) feeds two local buses, which have been tested for more than 3 years.
_______________________________________________________________
[1] Bauer CG, Forest TW. Effect of hydrogen addition on the performance of methane-fueled vehicles. Part I: effect of S.I. engine performance. International Journal of Hydrogen Energy 2001;26:55–70.
[2] Bauer CG, Forest TW. Effect of hydrogen addition on the performance of methane-fueled vehicles. Part II: driven cycle simulations. International Journal of Hydrogen Energy 2001; 26:71–90.
[3] Orhan Akansu S, Dulger Z, Kaharaman N, Veziroglu TN. Internal combustion engines fuelled by natural gas–hydrogen mixtures. International Journal of Hydrogen Energy 2004;29:1527–39.
[4] Ortenzi F, Chiesa M, Scarcelli R, Pede G. Experimental tests of blends of hydrogen and natural gas in light-duty vehicles. International Journal of Hydrogen Energy 2008;33:3225–9.
[5] Haeseldonckx D, D’haeseleer W. The use of natural-gas pipeline infrastructure for hydrogen transport in a changing market structure. International Journal of Hydrogen Energy 2007;32:1381–6.
[6] De Falco M, Giaconia A, Marrelli L, Tarquini P, Grena R, Caputo G. Enriched methane production using solar energy: an assessment of plant performance. International Journal of Hydrogen Energy 2009; 34: 98-109.
[8] Cavinato C , Bolzonella D, Fatone F, Cecchi F, Pavana P. Optimization of two-phase thermophilic anaerobic digestion of biowaste for hydrogen and methane production through reject water recirculation. Bioresource Technology 2011;102:8605–8611.
[12] Deployment/13-06-06/339.pdf

Environmental Monitoring in Offshore Oil&Gas Industry

Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy) 

1. Theme description

The global environmental situation of the Earth is becoming increasingly problematic and critical, and the outlook for our future is increasingly gloomy. The major reason for this pessimistic outlook is the rapidly growing world population; at the same time, per-capita consumption has risen tremendously in the developed countries. There is no doubt that the Earth will not be able to satisfy such increasing demand. Because of these developments, radical changes to the global situation, and especially to the ecology, lie ahead. Air pollution, the greenhouse effect, and the noticeable impact of both on coastal areas, especially in the Third World, are of course important critical points.

Today, we have the opportunity to obtain the necessary information on the overall situation by means of modern remote sensing methods. The advantage of this kind of environmental data supply is that information is obtained worldwide to a single standard, at regular, short intervals, applying comparable measures. These aspects of regularity and comparability offer great potential because they make it possible to produce "snapshots" of the environmental situation at regular intervals.

From a general point of view, environmental monitoring can be defined as the systematic sampling of air, water, soil, and biota in order to observe and study the environment, as well as to derive knowledge from this process [1], [2]. Monitoring can be conducted for a number of purposes, including to establish environmental “baselines, trends, and cumulative effects”, to test environmental modeling processes, to educate the public about environmental conditions, to inform policy design and decision-making, to ensure compliance with environmental regulations, to assess the effects of anthropogenic influences, or to conduct an inventory of natural resources [3].

Environmental monitoring can be conducted on biotic and abiotic components of any of the Earth's spheres (see Figure 1), and can be helpful in detecting baseline patterns and patterns of change in the inter- and intra-process relationships among and within these spheres. The interrelated processes that occur among the five spheres are characterized as physical, chemical, and biological. The sampling of air, water, and soil through environmental monitoring produces data that can be used to understand the state and composition of the environment and its processes.

Environmental monitoring uses a variety of equipment and techniques depending on the focus of the monitoring. For example, surface water quality can be measured using remotely deployed instruments, handheld in-situ instruments, or through the application of biomonitoring in assessing the benthic macroinvertebrate community [4]. In addition to techniques and instruments used during field work, remote sensing and satellite imagery can also be used to monitor larger-scale parameters such as air pollution plumes or global sea surface temperatures.

Figure 1 - The five spheres of the Earth System [5]

2. Environmental monitoring applied to offshore Oil&Gas platforms

When conducting oil and gas operations, there is a risk of impacting the marine environment. Generally, environmental authorities set up guidelines to monitor the environmental conditions around oil and gas production platforms.

Using results from a long-term survey programme, the following are normally assessed:

  • the environmental state around the platforms compared with a reference station
  • spatial and temporal changes in the environmental state of the seabed around the platform

As part of the monitoring surveys, several sediment samples (see Figure 2) can be collected at different monitoring stations in order to carry out:

  • physical and chemical analyses
  • the identification and quantification of benthic fauna

Physical and chemical analyses on the samples can include:

  • grain size analysis and determination of the median grain size and the silt/clay fraction of the sediment
  • dry matter, loss on ignition and total organic carbon
  • metals – Barium (Ba), Cadmium (Cd), Chromium (Cr), Copper (Cu), Lead (Pb), Zinc (Zn), Mercury (Hg) and Aluminium (Al)
  • total hydrocarbons and polycyclic aromatic hydrocarbons/ alkylated aromatic hydrocarbons (PAH/NPD)

Analyses of the collected benthic fauna can include:

  • species identification
  • biodiversity and abundance analyses
  • biomass of all major taxonomic groups (as total wet weight and total dry weight)
  • precise determination of the biomass of the brittle star Amphiura filiformis, which is known to be sensitive to drilling activities
Figure 2 - Sediment samples being collected around platforms using a HAPS core sampler

Statistical analyses and available literature can be also used to evaluate the environmental state around the platforms.

Generally, to ensure the high quality of the collected results, all procedures comply with relevant international Health, Safety and Environment (HSE) standards and with the requirements of local environmental authorities. This includes performing the survey in accordance with requirements regarding the:

  • number of samples taken
  • analyses of samples for certain physical, chemical and biological variables

3. An Example of application: UK, Netherlands and Norway case studies [6]

Monitoring activities have been performed in all three countries to look at the effects of discharges in the sediments and in the water column. The effects of flaring and light from offshore installations on migrating birds have been monitored on the Dutch Continental Shelf, and studies on the effects of seismic activity on fish and marine mammals have been performed on the Norwegian Continental Shelf. An overview of the monitoring activities performed in the United Kingdom, the Netherlands and Norway is given in Tables 1, 2 and 3.

Monitoring of sediments contaminated by discharges of oil-based muds (OBM) has shown that the benthic communities close to the discharge points have been highly modified, with a transitional zone with detectable effects on benthic fauna and an outer zone with no detectable effects. This is observed in all three countries. The areas contaminated with OBM are decreasing, and so are the benthic effects. The Dutch study found biological effects out to 250 meters from the discharge point 20 years after the discharge. The latest data from Norway show a total contaminated area of 155 km2 on the Norwegian Continental Shelf. This is chemical contamination, not biological disturbance, and the area also includes sites where OBM has never been operationally discharged; hydrocarbon contamination at these sites may be caused by produced water or accidental spills.

The Dutch study on the effects of discharges of water-based mud (WBM) cuttings showed no detectable effect on the benthic community. Norwegian monitoring and one-off surveys have shown a disturbance of the fauna typically out to approximately 50 meters from single wells. The disturbance is most likely caused by the physical impact of the cuttings, and species living in or on the sediment die. However, rapid recolonization is observed, although the composition of species may change if the grain size is changed. In areas with several production wells the affected area is larger, and effects may be caused by discharges other than WBM and cuttings.

Results from Norwegian water column monitoring in the last few years are positive in the sense that the methods used are now functioning. It is crucial to know enough about how the plume of produced water is moving in order to place the cages with test species at the right spots. The results show that caged mussels in the effluent accumulate PAH and that the levels decrease with increasing distance from the discharge.

The biological effects (biomarkers) also show gradients, with stronger responses in the cages closest to the produced water discharge. The levels of PAH metabolites suggest a moderate exposure level. The Dutch study showed an accumulation of naphthalene in blue mussels at a distance of 1000 meters from the platform. Analyses of wild fish in the Norwegian Tampen area have shown increased levels of DNA adducts in haddock. A different lipid content or lipid composition of the cell membranes has been shown in cod and haddock from the Tampen area compared with other areas in the North Sea. These effects may be due to the fish feeding on old cuttings piles, and are not necessarily a result of today's produced water discharges. It has not, however, been concluded what these findings mean for the individual fish, the populations or the ecosystems as such.

Monitoring activities and studies other than the monitoring of discharge impacts have also been performed in the three countries. The Dutch study on birds suggests that the chance of flaring directly impacting a flock of birds is small and significant only at night during the migration periods.

Table 1 -  Sediment monitoring
Table 2 - Water column monitoring
Table 3 - Other monitoring activities

Sound did not appear to have any effect on seabirds or songbirds during migration. However, the study calculates that about 10% of the total bird population crossing the North Sea is impacted in some way by the light emitted from the main deck of offshore installations. The Norwegian study on the impacts of seismic surveys on fish showed that impacts (including mortality) on fish and their early life stages occurred only immediately adjacent (< 5 metres) to the sound source. This impact was not significant at the population level and did not affect recruitment into commercial stocks. Fish show a startle response to impulsive sound, and this effect may be observed up to 30 km from the source.

[1] Artiola, J.F., Pepper, I.L., Brusseau, M. (Eds.). (2004). Environmental Monitoring and Characterization. Burlington, MA: Elsevier Academic Press.
[2] Wiersma, G.B. (Ed.) (2004). Environmental Monitoring. Boca Raton, FLA: CRC Press.
[3] Mitchell, B. (2002). Resource and Environmental Management (2nd ed.). Harlow: Pearson
[4] The Community-Based Environmental Monitoring Network (CBEMN). (2010). The Environmental Stewardship Equipment Bank.
[5] De Blij, H.J., Muller, P.O., Williams, R.S., Conrad, C., Long, P. (2005). Physical Geography: the Global Environment. Don Mills, ONT: Oxford University Press.
[6] An Overview of Monitoring Results in the United Kingdom, the Netherlands and Norway, OSPAR Commission, 2007

Particulate Emission & Removal Technologies

Author: Mauro Capocelli, Researcher, University UCBM – Rome (Italy)

1. Theme description

Particulate matter (PM) is a complex mixture of micrometric particles and liquid droplets made up of organic soot (from VOCs) as well as inorganic components such as soil, dust, metals and acids (nitrates and sulphates). Particle size, fundamental for transport as well as for health effects, is usually classified by the aerodynamic diameter, the size of a unit-density sphere with equivalent aerodynamic characteristics (Figure 1). This size can vary over four orders of magnitude in the atmosphere; the largest particles (coarse fraction), mechanically produced, include pollen grains, mould spores, and wind-blown dust from agricultural processes, sea spray, uncovered soil, unpaved roads or mining operations; the smallest (fine fraction) are mainly formed from gases by nucleation and coagulation at scales below 0.1-1 μm (accumulation range). Moreover, secondary aerosol can be formed by chemical and physical reactions in the atmosphere, as acidic species (from sulphuric and nitric acid) and ammonium salts (in the presence of ammonia). The carbonaceous fraction of aerosols is composed of organic matter (either primary, or secondary if derived from the oxidation of VOCs) and elemental carbon (EC, also known as black carbon, BC).

Fig. 1 - Size distribution of particulate matter
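The aerodynamic diameter defined above can be made concrete with Stokes' law: the terminal settling velocity of a unit-density sphere scales with the square of its diameter. A minimal sketch (the slip correction relevant for sub-micron particles is neglected, and the viscosity is a standard value for air at room temperature):

```python
# Stokes-law settling velocity of a unit-density sphere in air, illustrating why
# the aerodynamic diameter controls transport (valid at low particle Reynolds
# number; the Cunningham slip correction for sub-micron sizes is neglected).
G = 9.81          # m/s2
MU_AIR = 1.81e-5  # Pa*s, air at ~20 degC
RHO_P = 1000.0    # kg/m3, unit density by definition of aerodynamic diameter

def settling_velocity(d_m):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (metres)."""
    return RHO_P * G * d_m**2 / (18.0 * MU_AIR)

for d_um in (0.1, 1.0, 10.0):
    print(f"{d_um:5.1f} um -> {settling_velocity(d_um * 1e-6):.2e} m/s")
```

A 10 µm particle settles at millimetres per second, while a 0.1 µm particle is four orders of magnitude slower, which is why the fine fraction stays airborne for so long.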

Sources and effects Figure 2 represents the contribution to PM pollution from different sectors and activities in European countries. The particles produced by combustion processes represent the largest portion of the anthropogenic sources. Large stationary sources are related to power generation and, to a lesser extent, directly to the oil & gas industry. The major exposure risks are related to domestic heating, while transport (urban traffic and the emissions of the diesel engines of harboured vessels) is the second most relevant source in inhabited areas. Gas flaring is recognized as an important source of pollution, even though limited to specific zones [1]. Uncontrolled gas flaring can generate emissions of unburned hydrocarbons, particulates and polycyclic aromatic hydrocarbons (PAH). Every year, approximately 140-150 billion cubic meters of natural gas are flared into the atmosphere (equivalent to three quarters of Russia's gas exports, or almost one third of the European Union's gas consumption [2]). In 2011, Johnson et al. measured the soot emission from a large gas flare in Uzbekistan; they highlighted a potentially dramatic environmental impact of gas flaring, calculating a soot emission rate of 7400 g/h, comparable to ∼500 buses constantly driving and estimated at 275 trillion soot aggregates per second [3].
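The figures quoted from Johnson et al. can be cross-checked with simple arithmetic; the sketch below only rearranges the numbers given above:

```python
# Plausibility check on the flare soot figures quoted above (Johnson et al.):
# 7400 g/h, compared with ~500 buses, and 275 trillion aggregates per second.
soot_rate_g_h = 7400.0
n_buses = 500
aggregates_per_s = 275e12

per_bus_g_h = soot_rate_g_h / n_buses                      # implied per-bus rate
mass_per_aggregate_g = (soot_rate_g_h / 3600.0) / aggregates_per_s

print(f"per-bus soot rate: {per_bus_g_h:.1f} g/h")          # ~14.8 g/h per bus
print(f"mass per aggregate: {mass_per_aggregate_g:.1e} g")  # femtogram scale
```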

Fig. 2 - Sector contributions of emissions of primary particulate matter and secondary precursors in EEA member countries [4]

Exposure to particulate matter is associated with serious health effects such as respiratory and cardiovascular disease, depending on the specific particle size, morphology and composition. PM size is directly linked to damaging potential: very fine inhalable particles remain suspended in the atmosphere for a long time, traveling long distances from the emitting sources and, once inhaled, reach the deepest regions of the lungs and enter the circulatory system. Generally, the smaller the particle (and the higher its specific area), the higher its toxicity, also due to the adsorption of pollutants with specific health effects (carcinogenic and mutagenic compounds). Heart attacks with associated premature death, irregular heartbeat, asthma, decreased lung function, and several respiratory symptoms, such as irritation of the airways, coughing or difficulty breathing, are among the recognized issues of PM exposure [5]. PM pollution is estimated to cause more than 50,000 deaths per year in the United States and 200,000 deaths per year in Europe [6]. Fine particles impact extended ecosystems by traveling over long distances, reducing visibility, polluting ground and surface waters, and acting on climate change and global warming (BC is the second most important climate-warming agent after CO2, with a radiative forcing of 1.1 W/m2). Another climate effect is cloud formation, since fine particles act as water condensation nuclei [5].


2. Removal technologies

Removal technologies rely on different strategies to separate solid particles from the flowing gas: intercepting particles by acting on particle size and shape (filtration and scrubbing), or exploiting external force fields such as gravitational, electrical and centrifugal ones.

Filtration In a Fabric Filter (FF), the waste gas is forced to pass through a tightly woven or felted fabric, which collects particulate matter by sieving and other related mechanisms. Fabric filters can be in the form of sheets, cartridges or bags (the most common type), with a number of individual filtering units housed together in a group. At low particle loads, the filter collection efficiency is primarily related to the filter pore size and length. High particulate loading forms a "cake" on the filter surface, increasing the collection efficiency. Fabric filters are used primarily to remove particulate matter (and other hazardous air pollutants in particulate form, such as metals) at moderate loads (and gas flow rates up to about 2×10^6 Nm3/h) down to PM2.5. This technology is useful for collecting particulate matter with an electrical resistivity either too low or too high for electrostatic precipitators, making it suitable for fly ash from low-sulphur coal or fly ash containing high levels of unburnt carbon [7]. The cleaning intensity and frequency are important variables determining the removal efficiency (the dust cake provides increased fine particulate removal), the pressure drop across the fabric (ΔP 100-500 mbar) and the consequent energy requirement (0.2-2 kWh/1000 Nm3). Catalytic filtration is commonly adopted in the new generation of diesel particulate filters (DPF) for automotive applications; the oxidation catalyst and the particulate filter are usually combined so that particles can be burnt off continually. The catalytic filter consists of an expanded polytetrafluoroethene membrane laminated to a catalytic felt substrate. It is used to separate particulate and eliminate hazardous contaminants from the gaseous phase, such as dioxins and furans, but also aromatics, polychlorinated benzenes, polychlorinated biphenyls, volatile organic compounds and chlorinated phenols.
The filtration efficiency of DPFs is > 99% for solid matter (> 90% overall when the non-solid fraction is included). These systems can alternatively be designed to trap only a portion of the total particle load (e.g. 70% instead of 100%) in order to obtain a lower back pressure and a lower blocking risk.
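The energy requirement quoted above follows directly from the pressure drop: the specific fan energy is ΔP times the gas volume, divided by the fan efficiency. A minimal sketch, assuming a typical 70% fan efficiency (an illustrative value, not from the text); note that pressure drops of a few tens of mbar reproduce the quoted 0.2-2 kWh/1000 Nm3 range:

```python
# Specific fan energy needed to drive gas through a filter at pressure drop dp:
# E [kWh per 1000 Nm3] = dp [Pa] * 1000 [m3] / (3.6e6 [J/kWh] * fan efficiency).
# The 70% fan efficiency is an assumed, typical value.
def fan_energy_kwh_per_1000nm3(dp_pa, eta_fan=0.70):
    return dp_pa * 1000.0 / (3.6e6 * eta_fan)

for dp in (1000.0, 2500.0):   # 10 and 25 mbar, illustrative values
    print(f"dp = {dp / 100:.0f} mbar -> "
          f"{fan_energy_kwh_per_1000nm3(dp):.2f} kWh/1000 Nm3")
```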

Gravity and Centrifugal force Larger particles can be removed from flue gas by exploiting gravity/mass inertia and internal obstructions. A separator chamber can be installed as a preliminary step to prevent entrainment of the washing liquid with the purified waste gas and/or to remove dust, aerosols and droplets. Abrasive particles can also be treated in order to protect the downstream equipment. The separation occurs by impact with properly designed internal surfaces, such as baffles, lamellae or metal gauzes. The main advantages of separators are their suitability for higher temperatures and the lack of moving parts, which means low maintenance and low pressure drop. On the other hand, the low removal efficiency makes them unsuitable for systems with small density differences between gas and particles. By exploiting centrifugal forces, the separation can be achieved through cyclones. In a purposely designed conical chamber, the incoming gas is forced into circular motion down the cyclone near the inner surface of the cyclone tube. Particles in the gas stream are forced toward the cyclone walls by the centrifugal force of the spinning gas; the larger ones reach the cyclone walls and fall down into a bottom hopper, where they are collected. These simple devices are used primarily to control particles over PM10 (as pre-cleaners for more expensive final control devices such as fabric filters or electrostatic precipitators); high-efficiency cyclones can be designed to be effective even for PM2.5. The main advantages of classical separation chambers are retained in these conical arrangements.
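Cyclone performance is classically estimated through the Lapple cut diameter, the particle size collected with 50% efficiency. A hedged sketch with hypothetical geometry and operating values (not from the text), illustrating that a typical cyclone cuts in the PM10 range:

```python
import math

# Classical Lapple cut-diameter estimate for a cyclone: the particle size
# collected with 50% efficiency. The geometry, velocity and density values
# below are hypothetical illustration inputs.
MU = 1.81e-5        # gas viscosity, Pa*s
RHO_P = 2000.0      # particle density, kg/m3
RHO_G = 1.2         # gas density, kg/m3

def lapple_d50(inlet_width_m, n_turns, inlet_velocity_m_s):
    """Cut diameter (m) from the Lapple model."""
    return math.sqrt(9.0 * MU * inlet_width_m /
                     (2.0 * math.pi * n_turns * inlet_velocity_m_s
                      * (RHO_P - RHO_G)))

d50 = lapple_d50(inlet_width_m=0.2, n_turns=5, inlet_velocity_m_s=15.0)
print(f"d50 ~ {d50 * 1e6:.1f} um")
```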

Wet Scrubbing Wet scrubbers (WS) intercept PM through direct contact with liquid droplets. WS can be assembled in various geometries, each optimized for a specific gas flow rate; depending on the contact dynamics, they are arranged as spray towers, packed-bed scrubbers and Venturi scrubbers (Figure 3). The latter accelerates the gas stream in a throat to atomize the scrubbing liquid and improve gas-liquid contact.

Fig. 3 - Schematic of a Venturi Wet Scrubber [8]

Liquid scrubbers are used for the removal/recovery of flammable and explosive dusts as well as the treatment of gaseous compounds. Furthermore, WS have the advantage of cooling and supersaturating the gas stream, leading to particle scrubbing by condensation. WS can operate at medium/high collection efficiency and low cost. On the other hand, their main disadvantages are the risk of corrosion and freezing, the generation of a liquid by-product and the low particle collection efficiency in the 0.1-2 µm range.

Electrostatic force The electrostatic precipitator (ESP) uses electrical forces to drive particles in gas streams onto collector plates. It is called "wire-plate" if the gas flows horizontally, parallel to vertical plates of sheet material, and "wire-pipe" if the electrodes are long wires running through the axis of each tube. The entrained particles acquire an electrical charge passing through a corona field generated by discharge electrodes (DC voltage in the range of 20-100 kV). ESPs have high efficiency and low pressure drop. The main disadvantages are related to the maintenance of the high-voltage equipment (electrode cleaning) as well as the danger of dust explosion after discharges. In 2006, Jaworek et al. [9] published a comprehensive review of the state of the art of wet ESPs for gas cleaning (mainly dust or smoke particles). In a single-stage ESP, the charging and collection at the electrode take place in one device, while in a two-stage ESP, charging and removal of the particles occur in separate electric fields (and consequently separate chambers). The two-stage ESP is common for small waste gas streams (< 90,000 Nm3/h) characterized by a high concentration of micrometric and sub-micrometric particles (e.g. smoke or oil mist). The EPA gives a detailed overview of ESP types, configurations and design procedures [10].
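ESP collection efficiency is classically estimated with the Deutsch-Anderson equation, η = 1 − exp(−wA/Q), where w is the effective migration velocity, A the collecting area and Q the gas flow rate. A minimal sketch with hypothetical illustration values:

```python
import math

# Deutsch-Anderson estimate of ESP collection efficiency:
# eta = 1 - exp(-w * A / Q). The migration velocity, plate area and gas flow
# used below are hypothetical illustration inputs, not from the text.
def esp_efficiency(w_m_s, area_m2, q_m3_s):
    return 1.0 - math.exp(-w_m_s * area_m2 / q_m3_s)

eta = esp_efficiency(w_m_s=0.08, area_m2=5000.0, q_m3_s=100.0)
print(f"collection efficiency ~ {eta:.1%}")  # ~98.2%
```

The exponential form shows why the last percentage points of efficiency are expensive: halving penetration requires adding the same collecting area again.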


3. Innovative technologies

Growing environmental evidence and the recent emission regulations are forcing the development of more effective gas cleaning technologies (particularly effective at submicron sizes). The existing technologies have low efficiency in the particle diameter range 0.01-1 µm, called the Greenfield gap region. As mentioned above, the capture of particulate matter is usually carried out by fabric filters and electrostatic precipitators, which are the current best available technologies. However, these units show limited efficiency in capturing particles of submicron or nanometre size. Moreover, the ESP technology is ineffective for particle resistivities outside the range 10^8-10^11 Ω cm and for gas streams containing water droplets. On the other hand, FF cannot be used if the water content in the flue gas can condense on the cake deposited on the bags. Therefore, a new challenge for scientific research is the development of new cleaning systems to remove particles from flue gas and the optimization of the existing technologies in order to improve the capture of submicronic particles [11]. An example is the research activity in the field of diesel particulate abatement, where several strategies are under development, particularly in the ship-emission context. As the emissions from diesel ship engines represent an emerging issue, the International Maritime Organization has enforced stricter environmental regulations. A consortium of European universities and industrial partners, within the European Seventh Framework Programme, developed a modular on-board process combining different units to remove specific primary pollutants (SOx, NOx, PM and VOC) [12]. The PM removal technology, developed by the University of Naples, consists of an innovative upgrade of a wet scrubbing device: the Wet Electrostatic Scrubber (WES) increases the scrubber collection efficiency by sweeping the precipitation chamber with charged droplets.
These act as small collectors attracting the particles by Coulomb force. A practical example of this phenomenon is the scavenging of atmospheric aerosol during thunderstorms, which achieves very high removal efficiencies [13]. Different charging and spraying configurations are possible, and PM can be charged either negatively or positively, with droplets of opposite polarity. A commercial application of this interesting technology is the Cloud Chamber Scrubber (CCS) by Tri-Mer Corporation (Figure 4) [14]. It is composed of three zones: a preconditioning chamber (A) for the removal of coarse particles and humidity/temperature adjustment; a cloud generation vessel (B) for the removal of neutral and negative submicronic particles; and a second cloud generation vessel (C) with negatively charged droplets, so that neutral and positive particles are captured. Afterwards the treated air flows through a mist eliminator before discharge (particles between 0.1 and 2.5 µm).

Fig. 4 - Layout of the Cloud Chamber Scrubber [14]
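The Coulomb attraction between an oppositely charged droplet and particle, which drives the electrostatic scavenging described above, can be estimated with the point-charge law F = q1·q2/(4πε0·r²). All numerical values below are assumptions chosen only to give an order of magnitude:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]
E = 1.602e-19     # elementary charge [C]

def coulomb_force(q_droplet, q_particle, r):
    """Magnitude of the Coulomb force [N] between two point charges
    (taken positive) separated by a distance r [m]."""
    return q_droplet * q_particle / (4 * math.pi * EPS0 * r**2)

# Illustrative (assumed) values: a 50-um droplet carrying 1e5 elementary
# charges and a 0.5-um particle carrying 100 charges, 100 um apart.
F = coulomb_force(1e5 * E, 100 * E, 100e-6)
print(f"Coulomb force: {F:.3e} N")
```

For submicron particles this attraction dominates over inertial and diffusional capture mechanisms, which is precisely why charged-droplet scrubbing targets the Greenfield gap.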
[1] Journal of Environmental Protection, 2011, 2, 1341-1346
[3] Environ. Sci. Technol. 2011, 45, 345–350
[4] European Environment Agency:
[5] D'Addio, L., 2011. PhD Dissertation. Wet electrostatic scrubbing for high efficiency submicron particle capture.
[6] Mokdad, Ali H., et al. 2004. "Actual Causes of Death in the United States, 2000." J. Amer. Med. Assoc. 291:10:1238
[7] D'Addio, L., 2011. PhD Dissertation. Wet electrostatic scrubbing for high efficiency submicron particle capture.
[9] Jaworek et al., 2006. Environmental Science & Technology, Vol. 40, No. 20
[11] M. Giavazzi, L. D'Addio, F. Di Natale, C. Carotenuto, A. Lancia, New technologies for the removal of submicron particles in industrial flue gases. April 2014
[12] Di Natale et al., Capture of fine and ultrafine particles in a wet electrostatic scrubber. Journal of Environmental Chemical Engineering 03/2015; 3(1). DOI: 10.1016/j.jece.2014.11.007
[13] Di Natale et al., 2012. New Technologies for Marine Diesel Engine Emission Control. Chemical Engineering Transactions 01/2013; 32(2012):361-366; D'Addio et al., A lab-scale system to study submicron particles removal in wet electrostatic scrubbers. Chemical Engineering Science 06/2013; 97:176-185.

Nanotechnology in Oil Industry

Author: Andrea Milioni – Chemical Engineer – on Cooperator Contract - University UCBM – Rome (Italy)

1. Theme description

Advances in nanoscale-structured materials represent one of the most interesting innovations bringing technological progress to many industries. Nanoparticle technology developments essentially concern materials engineering, with the possibility of new metallic alloys ensuring high strength, low weight and high resistance to corrosion and abrasion. However, these materials can appear in different forms, from solid to fluid, with the possibility of ad hoc nanoparticle-fluid combinations.

The upstream oil & gas industry could receive a great boost from innovations in this field, since its processes expose equipment materials to extreme working conditions. Moreover, developments in nanotechnology, associated with suitable simulation tools, allow the characterization of interfacial phenomena between minerals and fluids (wettability, etc.), leading to a better understanding of the mechanisms governing hydrocarbon recovery. Currently, shale gas and oil production increases the need for nanotechnology to better characterise the organic content in shale nanopores.

Almost every oil & gas company is heavily investing in nanotechnologies to enhance oil recovery, improve equipment reliability, reduce energy losses during production, provide real-time analytics on emulsion characteristics and develop high-performance products (e.g. high-performance lubricating oils, which have great relevance in the oil industry). Some recent applications in these fields are described in the following.


2. Enhancement in oil recovery

The use of nanoparticles in Enhanced Oil Recovery (EOR) is one of the most important fields of application, as it provides larger amounts of oil during extraction, thus ensuring a faster return on investment. Different techniques using nanotechnology are being considered, and the use of nano-robots for real-time insight into the well pad appears very promising. These tiny robots will be able to provide operators with useful information to better conduct drilling operations, for example by dynamically adapting the additive mixtures or the operating pressure. At the EXPEC Advanced Research Center, important work has been carried out on the use of nano-robots in oil & gas reservoirs, designing reservoir robots (called Resbots) used as nano-reporters. The main difficulty lies in adapting the Resbots' physical and chemical properties so that they can pass through the tiny pores and then be recovered, but some experiments have brought good results [1]. By adding sensors inside the robots, very important information will be obtained.

EOR can also be achieved by using nanoparticles dispersed in suitable fluids. Recently, Ogolo et al. [2] performed EOR experiments using different nanoparticles, such as magnesium oxide, aluminium oxide, zinc oxide, zirconium oxide, tin oxide, iron oxide, nickel oxide, hydrophobic silicon oxide and silane-treated silicon oxide, showing enhanced recovery and boosted hydrocarbon production. The effects resulting from the use of these substances are related to changes in rock wettability, reduction of oil viscosity, reduction of interfacial tension, reduction of the mobility ratio and permeability alterations. A further example of using nanoparticles as additives to improve oil recovery efficiency has been provided by the University of Alaska Fairbanks [3], where researchers highlighted the performance guaranteed by metal nanoparticles dispersed in supercritical CO2, responsible for the reduction of heavy-oil viscosity with a consequent increase in recovery efficiency.

 Figure 1 - Chemical flooding method for Enhanced Oil Recovery [4]
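The mobility-ratio effect mentioned above is usually quantified as M = (krw/μw)/(kro/μo); lowering the oil viscosity lowers M and improves the displacement. A minimal sketch with purely illustrative numbers (the relative permeabilities and viscosities are assumptions, not data from the cited studies):

```python
def mobility_ratio(krw, mu_w, kro, mu_o):
    """Mobility ratio M = (krw/mu_w) / (kro/mu_o).
    krw, kro: relative permeabilities to water and oil [-]
    mu_w, mu_o: water and oil viscosities [cP]
    M <= 1 indicates a favourable, piston-like displacement."""
    return (krw / mu_w) / (kro / mu_o)

# Illustrative case: water flooding a heavy oil of 500 cP, compared with
# the same oil after a hypothetical nanoparticle-induced reduction to 50 cP.
M_before = mobility_ratio(krw=0.3, mu_w=1.0, kro=0.8, mu_o=500.0)
M_after = mobility_ratio(krw=0.3, mu_w=1.0, kro=0.8, mu_o=50.0)
print(M_before, M_after)  # a tenfold viscosity cut lowers M tenfold
```

Even the reduced value here remains far above 1, which is why heavy-oil EOR typically combines viscosity reduction with wettability and interfacial-tension effects.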

3. Improvement in equipment reliability

One of the main problems in the oil & gas industry is the use of materials capable of withstanding highly corrosive environments. The use of sour crude has highlighted this problem, reducing equipment lifetimes, particularly for pipelines and heat exchangers. The need to solve these problems has led to research in the field of nanotechnology, aimed at developing nanostructured coatings able to increase corrosion resistance. For example, Saudi Aramco, in collaboration with Integran [5], has carried out important research in this field through a product development program called "Application of Nanotechnology for In-Situ Structural Repair of Degraded Heat Exchangers". The aim is to develop products able to reduce corrosion damage and the downtime due to maintenance. In aggressive environments with corrosion and high wear, the use of protective films is complex. Until a few years ago, electroplated "engineered hard chrome" (EHC) was used for surface protection. EHC was preferred to cadmium (Cd) or zinc-nickel (ZnNi) electroplated metals because the latter offer low wear resistance and are quickly worn away. Given the toxicity of chromium, which negatively affects workers, a replacement for EHC has recently been sought. In this respect, Integran proposes an electroplated nanocrystalline cobalt, called Nanovate CoP, which represents an innovative and cost-effective alternative to EHC. Figures 2-4 show the results of typical corrosion tests [6].

 Figure 2 - Time to red rust following NSS exposure (as per ASTM B117) for nCoP compared to Enduro Industries LLC's ChromeRod and EHC from other industrial vendor


Figure 3 - ASTM B537 protection rating after 24hr CASS testing (as per ASTM B368) for nCoP compared to Enduro Industries LLC's ChromeRod, industrial EHC vendor and multilayer Nickel/Chrome coatings


Figure 4: Time to failure in NSS following magnesium chloride testing for nCoP compared to industrial EHC vendor, nitrocarburized and multilayer Nickel/Chrome coatings.


4. Energy losses reduction

Heat loss during oil & gas treatment operations is a very important problem. It has been estimated that about 50% of the supplied heat is lost in the equipment, and this considerably lowers the process efficiency. Research in this field is leading to the formulation of aerogel solutions that insulate the equipment surface. The use of nanotechnologies is making a major contribution here, as proven by innovative products like Nansulate® by Industrial Nanotech, Inc. [7]. Nansulate® achieves very low thermal conductivity through the use of a nanocomposite called Hydro-NM-Oxide mixed with acrylic resin and performance additives.

Table 1 - Experimental tests on Nansulate [8] 
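The benefit of a lower-conductivity coating can be sketched with steady radial conduction through a cylindrical layer, Q = 2πkLΔT/ln(r_out/r_in). All numbers below are assumptions for illustration only (they are not Nansulate data):

```python
import math

def pipe_heat_loss(k, L, r_in, r_out, dT):
    """Steady conductive heat loss [W] through a cylindrical insulation
    layer: Q = 2*pi*k*L*dT / ln(r_out/r_in).
    k: thermal conductivity [W/m K], L: pipe length [m],
    r_in, r_out: inner/outer radii of the layer [m], dT: temp. difference [K]."""
    return 2 * math.pi * k * L * dT / math.log(r_out / r_in)

# Illustrative comparison: 10 m of pipe, 150 K difference, 20 mm layer,
# conventional insulation (k ~ 0.040 W/m K) vs an assumed low-conductivity
# nanocomposite coating (k ~ 0.015 W/m K).
Q_conv = pipe_heat_loss(k=0.040, L=10.0, r_in=0.05, r_out=0.07, dT=150.0)
Q_nano = pipe_heat_loss(k=0.015, L=10.0, r_in=0.05, r_out=0.07, dT=150.0)
print(f"conventional: {Q_conv:.0f} W, nanocomposite: {Q_nano:.0f} W")
```

Since Q scales linearly with k, halving the conductivity halves the loss for the same layer thickness, which is the whole argument for low-k nanocomposite insulations.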


Table 2 - Summary of lubrication properties of nanoparticles of different materials as additives

5. Providing real-time analytics on well characteristics

A possibility offered by nanoparticles concerns the real-time analysis of emulsions extracted from wells, achieved by injecting nanoparticles that are subsequently recovered. One of the major companies in this field is MAST Inc. [9], which develops instruments to identify the spectroscopic characteristics of the particles during the extraction operations. The particles contain a magnetic core and are covered by sensitive substances which detect the presence of sulfur, water or gas. The experience in magnetic sensors has led to the development of techniques to observe them even in a fully opaque stream.

The importance of this technology is growing rapidly with the intense use of fracking, which secures more resources and a new development in oil exploration. However, fracking can also cause significant environmental impacts and therefore requires considerable environmental-monitoring efforts. In this respect, the use of nanosensors enables the development of techniques to preserve the purity of groundwater in the well proximity.

6. Use of nanoparticles for high-performance lubricant oils

The use of nanoparticles as additives in particular mixtures is bringing innovation to different industrial sectors, allowing the development of new high-performance products which will positively influence the related industries. One of the most important innovations is a new generation of anti-wear lubricant oils. As shown in several works, experimental results prove remarkable improvements in tribological behaviour (low wear and increased load-carrying capacity). The lubricant effect of different nanoparticles used as additives depends on the material category and essentially concerns the properties of typical nanoparticle materials. These are summarized in Table 2 and well described in Guo et al. 2013 [10].

[1]
[2] N.A. Ogolo et al. 2012, Enhanced Oil Recovery Using Nanoparticles, Society of Petroleum Engineers.
[3] Rusheet D. Shah 2009, Application of Nanoparticle Saturated Injectant Gases for EOR of Heavy Oils, Society of Petroleum Engineers.
[10] Guo et al. 2013, J. Phys. D: Appl. Phys. 47 (2014) 013001

Industrial Lubricant Oils

Author: Andrea Milioni – Chemical Engineer – on Cooperator Contract - University UCBM – Rome (Italy)

1. Theme description

Lubricants are products used mainly in engines to reduce friction among mechanical bodies. Contrary to the majority of petroleum products, which are identified through several parameters (the specs), lubricants are commonly identified only by their real performance, which can be tested only experimentally in specialized laboratories. The most important lubricant spec is the Viscosity Index (VI), a measure of viscosity variation with temperature.
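The VI is computed per ASTM D2270 from the oil's 40 °C viscosity U and two tabulated reference viscosities L and H (the VI = 0 and VI = 100 reference oils with the same 100 °C viscosity). A minimal sketch for the VI ≤ 100 case, assuming L and H are already known (in practice they are read from the standard's tables); the numbers are illustrative, not from the standard:

```python
def viscosity_index(U, L, H):
    """ASTM D2270 viscosity index for the VI <= 100 case:
    VI = 100 * (L - U) / (L - H).
    U: kinematic viscosity at 40 C of the oil under test [cSt]
    L, H: 40 C viscosities of the VI=0 and VI=100 reference oils
          having the same 100 C viscosity (tabulated in ASTM D2270)."""
    return 100.0 * (L - U) / (L - H)

# Illustrative numbers only: an oil with U = 120 cSt,
# between reference oils with L = 180 and H = 100 cSt.
print(round(viscosity_index(U=120.0, L=180.0, H=100.0)))  # 75
```

The closer U is to H (the high-VI reference), the higher the index, i.e. the less the viscosity falls off with temperature; oils with VI > 100 use a separate extrapolation formula in the standard.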

Lubricants are a blend of "base oils" and several additives. Base oils are generally produced from crude oils, but can also be produced from petrochemical feedstocks (synthetic lubes). Additives are chemicals produced by a few oil companies and by some chemical companies focused on this field, such as Lubrizol. The effective performance of a lube strictly depends on the additive mixture. Additives and base oils are normally commercialized on the market, so most companies buy and blend them. Lubricants, after use (exhausted oils), may be collected and reprocessed to obtain marketable "second-hand" products. Lubricants are among the most sophisticated and technology-intensive products of refining. Given the lower demand with respect to other petroleum products, they are produced in only a limited number of refineries.


2. Mineral Base Oils [1]

The quality of mineral base oils strictly depends on the crude origin, even if it can be partially modified through refinery processes. Base oils are a mixture of hydrocarbons, including alkanes (paraffins), alkenes (olefins), alicyclics (naphthenes), aromatics and some "mixed hydrocarbons" (single molecules containing different groups of the above classes). Regarding base-oil production, aromatics have a negative impact on the viscosity index. They also worsen the base-oil characteristics, mainly increasing deposit formation and reducing oxidation resistance.

Besides hydrocarbons, base oils contain the non-hydrocarbon molecules normally present in crude oil. The main non-hydrocarbon components contain sulphur, nitrogen and oxygen; the sulphur heterocyclics are the most abundant of them.

The base-oil feedstock is the vacuum heavy gas-oil; the subsequent units are a solvent extraction to separate the aromatics and a deparaffinization to extract the heavy paraffins (waxes).

The solvent treatment may be replaced by a hydrogen process, e.g. hydrocracking (HDC), perfectly integrated and already present in some refineries. This allows good yields and excellent-quality bases, even starting from a traditionally unsuitable crude. Figure 1 shows an integrated scheme for the production of base oils, either through solvent extraction or through HDC. The process usually ends with a hydrofinishing unit which improves colour, stability, etc. Blending and additivation are the final steps.

 Figure 1 - Integrated cycle of base oil production in refinery (if hydrocracking process is available) [2]

Base-oil cuts are internationally classified on the basis of the SUS viscosity (Saybolt Universal Seconds) measured at 40 or 100 °C (100 or 210 °F). In addition, a code precedes the SUS viscosity value, such as SN (solvent neutral) or HVI (High Viscosity Index). The abbreviation BS (Bright Stock) is used for heavier cuts produced from the deasphalted residue. The crudes most suitable for base-oil production are paraffinic ones, characterized by a high viscosity index (VI) but also by a high wax content. For certain applications, naphthenic crudes are more suitable because of their good quality at medium and low VI, their reduced wax content and their naturally low pour points.

Paraffinic base oils Paraffinic base oils arising from paraffinic crudes are the most widely used.

The characteristics of these base oils depend on the original hydrocarbons composition, as well as on the effect of solvent extraction and de-waxing processes. The paraffinic base oils viscosity index is generally greater than 95 and the pour point is relatively high.

The stricter the aromatic extraction, the higher the viscosity index. It is also possible to increase the index by relaxing the de-waxing severity, but in this case the low-temperature properties worsen.

Naphthenic base oils

Naphthenic base oils are produced from a few crudes (typically from Venezuela) and are currently used in a few applications where low-temperature properties are required and the viscosity index is less important.

These base oils have better solvent power but lower resistance to oxidation than paraffinic ones. Generally, they are also characterised by a low viscosity index (between 40 and 80) and a relatively low pour point due to the absence of paraffins.


3. Synthetic base oils

Most of the synthetic bases have both higher VI and flash points and lower pour points compared to mineral ones. On this basis, these oils are particularly useful under extreme temperature and pressure conditions.

Synthetic bases such as polyalphaolefins (PAO), alkylated aromatics, esters, polyglycols, polybutenes and polyinternalolefins (PIO) are widely used in the lubricant industry.

Polyalphaolefin (PAO)

Polyalphaolefins show very good characteristics when operating at cold temperatures thanks to their high degree of branching. However, in some oxidation tests they appear less resistant than mineral bases (in the absence of additives). This behaviour is due to the absence of the natural antioxidants present in mineral oils. PAOs are less polar and thus have low solvent power (solvency). This comes at the expense of the ability to solubilise the polar additives present in the lubricating oil and the oxidation products (gums) formed during service. The wide range of temperatures over which PAOs can work, together with their excellent chemical and physical characteristics, allows their use in various application areas.

Alkylated aromatics

The alkylbenzenes have poorer characteristics compared to PAOs but are used in refrigeration oils thanks to their excellent solubility and low pour point.


Polyglycols

Generally, they have a high viscosity index, which makes them particularly suitable for transmission lubricating oils, but they have low oxidation resistance.


Polybutenes are shear-resistant polymers and are used as Viscosity Index Improvers (VII). They have higher volatility but lower oxidation resistance and lower viscosity compared to PAOs and esters. In synthetic lubricants, polybutenes are usually combined with esters and PAOs and can help control the lubricant viscosity, giving low deposit formation and thickening.

Synthetic esters

The most immediate effect of the ester group on lubricant properties is a lower volatility and an increased flash point. Esters also influence other properties such as thermal stability, solvent power, lubricity and biodegradability.

Poly internal olefins (PIO) PIOs are characterized by a high viscosity index, excellent rheological behaviour at low and high temperatures, low volatility and good thermo-oxidative behaviour. They are employed as lubricants for internal combustion engines and industrial machinery.

4. Non conventional base oils

Non-conventional base oils (NCBO) are produced from vacuum cuts treated through hydrogen processes. The two main processes are hydrocracking and wax hydro-isomerization. NCBOs offer two important advantages: hydrogen processes can replace solvent extraction, reducing the dependence on crude origin, and they ensure high-quality base oils (better than conventional ones) thanks to lower volatility, higher viscosity index, better temperature stability and lower sulphur content.


5. Re-refined base oils

The re-refined bases are produced by re-processing exhausted oils, which must not be released into the environment but must by law be collected in authorized centres, from where they can be sent to controlled combustion plants or re-refined.

The re-refining processes, which consist of treatments removing volatile and insoluble components and additives, are able to produce lubricant bases with the same characteristics as mineral bases.

Re-refining yields about 60 kg of re-refined oil for every 100 kg of exhausted oil.

The treatment ends with a hydrogen treatment which eliminates or reduces the content of polynuclear aromatics (PNA), which are carcinogenic agents.


6. Base oil categories

Lubricating base oils are classified according to their physical characteristics and/or production process. The API (American Petroleum Institute) classifies base oils into five groups [3].

Group I - These oils are usually processed with solvents; they have a good degree of solvency but are more vulnerable to oxidation and thermal degradation than oils processed in a different manner. Group I oils are used in almost all automotive and industrial applications and are important for the formulation of lubricating greases.

Group II - Oils subjected to mild hydrocracking and catalytic de-waxing. They have high saturation levels, and good performance in terms of thermal and oxidation stability. These oils are used in a large range of automotive and industrial applications.

Group III - Typically subjected to severe hydrocracking, advanced catalytic de-waxing and/or hydro-isomerization, they have high viscosity indexes and very good thermal and oxidation stability. They are used primarily in the automotive sector.

Group IV - Oils produced synthetically. The main characteristics relate to low pour points, high viscosity indexes, excellent thermal stability and excellent oxidation stability. These oils are used primarily in the automotive industry, such as high-quality motor oils and transmission oils.

Group V - This group includes base oils which are not present in other groups such as naphthenic, esters and polyglycols.

Table 1 - API Classification of base oils and related production method [4]
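The boundaries between Groups I-III are commonly cited (from API 1509) in terms of saturates content, sulfur content and VI; Groups IV and V are defined by chemistry rather than by these parameters. A minimal sketch of that decision logic (the threshold values are the commonly cited API limits, not taken from Table 1 above):

```python
def api_group(saturates_pct, sulfur_pct, vi):
    """Classify a mineral base oil into API Group I, II or III using the
    commonly cited API 1509 limits. Groups IV (PAOs) and V (everything
    else) are defined by chemistry, not by these three parameters."""
    if saturates_pct >= 90 and sulfur_pct <= 0.03:
        if vi >= 120:
            return "Group III"
        if vi >= 80:
            return "Group II"
    if 80 <= vi <= 120:
        return "Group I"
    return "outside Groups I-III by these criteria"

# Illustrative oils (assumed analyses):
print(api_group(85, 0.10, 95))    # Group I   (solvent-refined)
print(api_group(97, 0.01, 105))   # Group II  (hydrocracked)
print(api_group(99, 0.001, 130))  # Group III (severely hydroprocessed)
```

The ordering of the checks matters: an oil qualifies for Group II/III only if it meets both the saturates and sulfur limits, otherwise it falls back to Group I on VI alone.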

7. Lubricants from renewable sources

The development of lubricants is traditionally based on mineral oils due to good technical properties and reasonable price of mineral oil. A disadvantage of mineral oil is its poor biodegradability which may cause environmental pollution.

Consequently, the research has evolved in the field of synthetic esters used as lubricants, exploiting renewable resources for the production of fatty acids.

In this way, the lubricants are sustainable and biodegradable. The physico-chemical properties of esters are able to cover the entire range of technical requirements for industrial lubricant development, ensuring high performance.

Experimental studies on synthetic esters have been performed on different types of formulations, mainly lubricants based on saturated and unsaturated esters [5].

The oxidation stability of saturated ester bases is higher than that of unsaturated esters. In particular, for rapeseed oil the oxidation stability of saturated esters is comparable to that of mineral-oil bases. Esters also exhibit less friction than mineral oil.


8. New trends in lubricant technology

In many industrial applications, technological advancement is strongly linked to innovation in the field of lubricants. For this reason, important efforts are being made to improve their quality. The objective is twofold: on one hand, increasing lubricant lifetime and reducing friction; on the other, reducing the environmental impact due to the use of fossil lubricants.

To meet these challenges, research on the use of ionic liquids as a new generation of lubricants is ongoing.

These new systems show a significant improvement in wear and friction. Ionic liquids consist of large molecules: asymmetric organic cations and inorganic anions. The large size spreads out the charges and reduces the electrostatic forces between the ions, so much so that they rarely form a regular crystal structure and may be liquid at room temperature. Ionic liquids have several properties that make them suitable as potential lubricants. Their low volatility, low flammability and thermal stability allow them to safely withstand the increases in temperature and pressure that occur under high friction [6].

Another significant advantage is the variety of usable anions and cations: at least one million possible combinations are estimated, each with its own specific properties [7]. This means that ionic liquids can be tailored for particular applications with high flexibility. For example, the specific tasks may concern adsorption on a surface, a particular reaction, miscibility in a base oil, etc.

For well-known lubrication systems, such as steel on steel, as well as for difficult systems such as steel on aluminium, ionic liquids have been shown to perform better than available commercial lubricants. However, ionic liquids are currently more expensive than conventional lubricants, so they may be limited to niche applications. For this reason, ionic liquids are at present most promising as lubricant additives, where more widespread use is possible.

Numerous nanoparticles used as additives have been explored in recent years. The results are very encouraging and show an overall improvement in friction and wear performance even at concentrations below 2% by weight. In particular, some particles such as CuO, ZnO and ZrO2 showed better performance compared to conventional additives [8].
[1] This brief review is inspired from “Encyclopaedia of Hydrocarbons” by ENI, Treccani 2005, Vol 2.
[2] The figure is taken from “Encyclopaedia of Hydrocarbons” by ENI, Treccani 2005, Vol 2.
[4] The table is taken from “Encyclopaedia of Hydrocarbons” by ENI, Treccani 2005, Vol 2.
[5] B. Krzan, J. Vizintin 2004 "Ester Based Lubricants Derived From Renewable Resources", Tribology in Industry, Volume 26, No. 1&2.
[6] Minami, I.; Kamimuram, H.; Mori, S. 2007 “Thermo-Oxidative stability of ionic liquids as lubricating fluids”. J. Synth. Lubr., 24, 135–147.
[7] Canter, N. 2005 “Evaluating ionic liquids as potential lubricants” Tribol. Lubr. Technol., 61, 15–17.
[8] A. Hernández Battez et al., 2008, "CuO, ZrO2 and ZnO nanoparticles as antiwear additive in oil lubricants", Wear, Elsevier.

Advanced & Alternative Low-Emission Fuels

Author: Mauro Capocelli,  Researcher, University  UCBM – Rome (Italy)

1. Theme description

Global energy demand has dramatically increased in recent years, and most of the world's energy needs today (>80%) are still covered by conventional fossil fuels such as coal, petroleum and natural gas (Table 1). The issues of energy efficiency in fuel production/combustion and of reserve depletion, as well as the increasing concerns about climate change and environmental pollution related to conventional fuels, are driving industrial R&D towards the development of alternative solutions. On this basis, this brief review focuses on the most recent strategies in the field of alternative fuels, with a specific insight into the low-emission strategies of the automotive industry.

Table 1 - World primary energy consumption and percentage of share [1]

The emissions from conventional fuel combustion are characterized mainly by the presence of carbon monoxide (CO), nitrogen oxides (NOx), sulfur oxides (SOx), hydrocarbons and particulate matter (PM). NOx are harmful to human health and act as precursors of tropospheric ozone. Acute CO poisoning can lead to high toxicity for the central nervous system and heart, while chronic exposure causes depression, confusion and memory loss. Carbon monoxide poisoning mainly causes hypoxia by combining with hemoglobin to form carboxyhemoglobin, reducing the oxygen-carrying capacity of the blood. Exposure to more than 20 ppm SO2 can cause death; moreover, SOx pollution strongly affects the life of entire ecosystems through its climate influence. Recent medical research suggests that PM is among the most dangerous pollutants; the effects of its inhalation (both acute and chronic) are nowadays associated with the majority of respiratory diseases, from asthma to lung cancer, and also with cardiopulmonary mortality, premature delivery, birth defects and premature death [2]. Besides these strong environmental impacts, every conventional fuel contributes to greenhouse gas emissions causing the well-known climate change. Wondering what will happen when oil runs out, Prof. Chris Rhodes asserts that, although the world's crude oil supply isn't going to run out any time soon, the current production rate cannot be sustained: "from 1965 to 2005, we see that by the end of it, humanity was using two and a half times as much oil, twice as much coal and three times as much natural gas, as at the start, and overall, around three times as much energy: this for a population that had "only" doubled. Hence our individual average carbon footprint had increased substantially – not, of course, that this increase in the use of energy, and all else, was by any means equally distributed across the globe" [3].
Following the Kyoto Protocol and the subsequent national directives, the industrialized countries are setting stricter emission-control policies for stationary and mobile sources. The main strategies for the development of low-emission vehicles (LEV) are the formulation of alternative low-emission fuels for the conventional internal combustion engine vehicle (ICEV) and the development of new high-tech renewable LEVs such as hybrid and fuel-cell vehicles (Fig. 1) [4]. Although the latter are making promising steps towards commercialization [5], they have not yet gained a considerable market because of economic, political and technological barriers. This results in the persistence of the ICEV as the dominant design. Therefore, this design is the focus of recent R&D efforts to decrease polluting emissions and increase engine energy efficiency (by developing injection, combustion-chamber and ignition-control technologies), as well as by devising alternative (and more environmentally friendly) fuels.

Figure 1 – Flame and flameless firing of heavy fuel oil
(left: flame mode - right: flameless), after Oltra and Saint Jean, 2009

2. Emulsions

The main pollutants in diesel emissions, NOx and PM, have peculiar formation mechanisms that hinder the simultaneous reduction of both, making a trade-off between the two necessary. Lowering the combustion flame temperature in order to reduce NOx generally unbalances soot formation and burnout, resulting in an increase in PM emissions. On the other hand, particulate emissions can be reduced by increasing the combustion temperature, which results in increased NOx emissions.

Figure 2 - Regimes of soot and NOx formation expressed in terms of flame equivalence ratio (fuel:air ratio) and flame temperature.
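The strong temperature sensitivity behind this trade-off comes from the thermal (Zeldovich) NOx mechanism, whose rate-limiting step, O + N2 → NO + N, has a very high activation energy. A minimal sketch, using the commonly quoted Arrhenius activation temperature from the combustion literature (an assumption, not a value from this article), of how the relative formation rate scales with flame temperature:

```python
import math

# Arrhenius temperature dependence of the rate-limiting Zeldovich step
# O + N2 -> NO + N (Ea/R ~ 38,370 K, a commonly quoted literature value;
# the pre-exponential factor cancels out when taking rate ratios).
ACTIVATION_TEMP = 38370.0  # K

def relative_thermal_no_rate(t_kelvin, t_ref=2000.0):
    """Thermal-NO formation rate at t_kelvin relative to the rate at t_ref."""
    return math.exp(ACTIVATION_TEMP / t_ref - ACTIVATION_TEMP / t_kelvin)

for t in (1800.0, 2000.0, 2200.0, 2400.0):
    print(f"{t:.0f} K -> {relative_thermal_no_rate(t):6.2f} x the 2000 K rate")
```

A 200 K change in flame temperature shifts the thermal-NO rate by roughly a factor of five to six, which is why temperature-lowering measures cut NOx so effectively while pushing the soot balance in the wrong direction.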

One way to overcome this issue is to replace the fuel with emulsions of diesel oil and water (without retrofitting the engine system). Lif and Holmberg gave an extended review of water-in-diesel systems[6]. Water-in-oil (W/O) emulsions are prepared using surfactants together with mechanical (including ultrasonic[7]), chemical[8] or electrical homogenizers (i.e. stirring the water into micro-droplets within the oil phase). Surfactants, thanks to the presence of both lipophilic and hydrophilic groups, reduce the oil–water surface tension, creating oil-in-water or water-in-oil two-phase emulsions (a layer of ionic surfactant can also prevent droplet merging). When the emulsion is heated, the water droplets vaporize and break through the oil layer (micro-explosion). This secondary atomization increases the surface area of the fuel and the extent of fuel–air mixing. Secondly, the presence of water dilutes the nuclei of soot growth, limiting the soot growth rate. Moreover, water could enhance soot burnout by increasing the presence of oxidizing species. All these aspects can contribute to lower PM emissions by inhibiting both soot and ash formation. On the other hand, the high latent heat of vaporization of water acts to lower the temperature, causing a reduction in NOx emissions[9]. Nadeem et al. compared the engine and emission performances of emulsified fuels (5–15% of water) prepared with conventional (CS) and gemini surfactants (GS). Their experimental results highlight the potential of W/O emulsions to significantly reduce the formation of thermal NOx (from more than 700 to 500 ppm), CO, SOx, soot, hydrocarbons and PM (more than 70% reduction) in diesel engines[10].


3. Fuel desulphurization

Conventional techniques for desulfurization of transportation fuels are based on hydro-desulfurization (HDS), in which the sulfur in the fuel is removed as H2S. This technique has the drawbacks of limited efficiency, owing to the low reactivity of benzothiophene and dibenzothiophene, and of high costs related to the operating conditions and the hydrogen requirement. On the other hand, oxidative desulfurization is based on the conversion of non-polar sulfur-containing aromatic hydrocarbons to the corresponding sulfones, which are easily extractable with methanol. This liquid–liquid heterogeneous system, which depends on mass transfer across the interface, can be enhanced through cavitation, both ultrasonic and hydrodynamic. Cavitation is the nucleation, growth, and transient collapse of micrometric gas–vapor bubbles driven by a pressure variation. It induces physical and chemical effects[11] in the reaction system that enhance the kinetics and yield of the process. The chemical effects consist in the generation of radicals through the dissociation of gas and vapor molecules during the transient collapse of the cavitation bubbles. The physical effects, in terms of turbulence generation (and therefore viscous dissipative eddies), shock waves and microjets, can be exploited to create emulsions and to reduce the mass transfer limitations[12]. A brief description of ultrasonic desulfurization is given by SulphCo, Inc., a Nevada corporation, which reported excellent results in the enhancement of fuel desulfurization, showing an impressive conversion to sulfones through an innovative treatment with ultrasonic horns[13]. Several innovative applications of cavitation desulfurization, from patents to applied research and technology development, have appeared in the literature[14].
While in acoustic cavitation the pressure variation is generated by ultrasonic waves, in hydrodynamic cavitation it is realized through properly designed flow restrictions operating at different pressures and flow rates. An example of the bubble dimensions and shear stresses at the collapse stage is shown in Figure 3.

Figure 3 - Simulation of bubble radius and bubble wall velocity for different configuration of hydrodynamic cavitation operating parameters[15].
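Simulations such as the one in Figure 3 typically integrate the Rayleigh–Plesset equation for the bubble radius R(t). A minimal sketch, using standard room-temperature water properties and an illustrative pressure drop (these parameters are assumptions, not the operating conditions of ref. [15]), of a bubble growing downstream of a flow restriction:

```python
# Water properties at room temperature (illustrative values)
RHO = 998.0      # density, kg/m^3
MU = 1.0e-3      # dynamic viscosity, Pa*s
SIGMA = 0.0725   # surface tension, N/m
P_V = 2.33e3     # vapour pressure, Pa
GAMMA = 1.4      # polytropic exponent of the gas inside the bubble
P0 = 101325.0    # ambient pressure upstream of the restriction, Pa
R0 = 10e-6       # initial (equilibrium) bubble radius, m

P_G0 = P0 + 2 * SIGMA / R0 - P_V  # initial gas pressure inside the bubble

def accel(r, v, p_inf):
    """Radial acceleration R'' from the Rayleigh-Plesset equation."""
    p_gas = P_G0 * (R0 / r) ** (3 * GAMMA)  # adiabatic gas compression
    dp = p_gas + P_V - p_inf - 4 * MU * v / r - 2 * SIGMA / r
    return (dp / RHO - 1.5 * v * v) / r

def simulate(p_inf, t_end=20e-6, dt=1e-9):
    """RK4 integration of R(t) under a constant far-field pressure p_inf."""
    r, v, r_max = R0, 0.0, R0
    for _ in range(int(t_end / dt)):
        k1r, k1v = v, accel(r, v, p_inf)
        k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r, v + 0.5 * dt * k1v, p_inf)
        k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r, v + 0.5 * dt * k2v, p_inf)
        k4r, k4v = v + dt * k3v, accel(r + dt * k3r, v + dt * k3v, p_inf)
        r += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        r_max = max(r_max, r)
    return r, r_max

# Sudden pressure drop to 10 kPa, mimicking the throat of a flow restriction
r_end, r_peak = simulate(p_inf=10e3)
print(f"peak radius: {r_peak * 1e6:.1f} um (from {R0 * 1e6:.0f} um)")
```

With a deeper pressure drop (below the vapour pressure) the growth becomes inertially dominated and the subsequent recompression produces the violent collapse exploited for radical generation; capturing that stiff collapse accurately requires a much smaller (or adaptive) time step.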

4. Alternative fuels

Owing to the generally shared belief that the upcoming shortage of oil will accelerate the switch to alternative fuels, all the major oil and automotive companies have alternative-fuel research programs[16]. Moreover, R&D on alternative fuels is often related to environmentally friendly strategies. The term alternative fuels comprises hydrogen, compressed natural gas (CNG), liquefied petroleum gas (LPG), biogas, dimethyl ether (DME), alcohols such as methanol and ethanol, vegetable oils and fatty acid methyl esters, and blends of these with gasoline or diesel. Opinions differ, and an ultimate decision about which type of product will dominate the market for vehicle fuels in the future is uncertain and depends on political as well as economic considerations. As shown in Figure 4, the cost of alternative fuels (e.g. ethanol produced from corn in the U.S.) often follows the cost of the equivalent conventional fuel, gasoline, its principal market competitor (and is rarely strictly connected to the prices of raw materials). Generally, fuel production can follow the pathway of natural gas, biomass or electricity. Natural gas is a versatile fuel, employable in modified spark-ignition engines or in dedicated engines[17]. It can be used directly in compressed or liquefied form or converted to methanol, dimethyl ether (DME), gas-to-liquid (GTL) fuel or Fischer–Tropsch diesel. Both PM and NOx emissions from natural gas-derived fuels are very low, while sulphur emissions are usually negligible. Liquefied petroleum gas (LPG) is mainly composed of propane and butane (and homologues liquefying at ~800 kPa) and is released during the extraction of crude oil and from the gases of oil refining processes. LPG fuels are based on light, low-carbon, clean-burning hydrocarbons, and their implementation can bring substantial reductions in CO, NOx, hydrocarbon and greenhouse gas emissions.
DME (originally conceived as an ignition improver for methanol) can be produced from different feedstocks such as natural gas, coal, oil residues and biomass. It has good ignition properties (high cetane number and low auto-ignition temperature); moreover, its simple chemical structure and high oxygen content result in soot-free combustion in engines[18].

Figure 4 – Biodiesel Production (millions of gallons/year) in the top world countries (2013) extracted from the 2013 Renewable Energy Data Book[19]

Arcoumanis et al. reviewed the potential benefits of using DME as an alternative fuel in standard compression-ignition engines with slight modifications of the conventional system (paying attention to corrosion and low-lubricity issues)[20]. Hydrogen can be used as a fuel in internal combustion engines and in fuel cells with zero pollutant emissions. It can be produced from natural gas as well as by water electrolysis. From the economic point of view, its utilization is controlled by the cost and the source of electrical energy. The Toyota Mirai, the first commercialized fuel cell car, has recently met with great success, highlighting that the spread of such technologies is limited mainly by infrastructural issues: the distribution chain, storage and handling (both in vehicles and at gas stations). Although these issues are not yet overcome, hydrogen represents a concrete frontier for the automotive industry. The biomass for fuel production can have various origins, such as black liquor, forestry residues, or municipal or industrial waste products. Among the different biomass-based fuels, the most accessible ones today are biodiesel and ethanol. Other resulting fuels are methanol, DME and Fischer–Tropsch diesel, while the gasification of biomass yields biogas-to-liquid fuels. Biodiesel is conventionally made by transesterification of a triglyceride with methanol (fatty acid methyl ester). It can be used either pure or in blends with regular diesel, with the benefit of reduced CO, CO2, hydrocarbon and PM emissions. Biodiesel combustion produces higher NOx emissions (to be treated with improved catalytic filters) while reducing SOx emissions to almost zero. Rapeseed and sunflower are among the main sources of edible raw material for biodiesel. To minimize the reliance on edible vegetable oils and to exploit naturally available oil plants, Ashraful et al.
studied the fuel properties, engine performance, and emission characteristics of biodiesel from various non-edible vegetable oils (karanja, mohua, rubber seed, and tobacco biodiesel), providing a detailed and extensive review of this field[21]. Based on their findings (reduced CO, HC and smoke emissions), they assert that non-edible oils have the potential to replace edible-oil-based biodiesels in the near future (some controversy arises over NOx emissions). In 2013 total biodiesel production was 6,948 million gallons, a 17% increase from 2012. In 2013 the United States led the world in biodiesel production, followed by Germany, Brazil, Argentina, France and Indonesia (U.S. cost of 3.92 $/gal in 2013)[22]. Because oxygenated fuels reduce PM emissions, alcohols are particularly attractive alternatives to conventional fuels. Gravalos et al. described the performance and emission characteristics of a spark-ignition engine fuelled with ethanol– and methanol–gasoline blends, highlighting the mixture properties (reported in Table 2)[23]. Moreover, alcohols can be produced as biofuels (and not necessarily linked to food production). In 2013, the Indian River BioEnergy Center began producing cellulosic ethanol at commercial volumes for the first time and is now among the major technology centers in the field of bioenergy. Its goal is to “take wastes and sustainably turn them into advanced biofuel and renewable power”[24]. Methanol can be produced from coal, biomass or even natural gas, while ethanol comes mainly from sugar cane, starch, wheat or wine. All car manufacturers have approved the use of E10 (a blend of 10% ethanol and 90% gasoline) and E5 (5% ethanol and 95% gasoline) in ordinary gasoline cars, and these blends are commonly available in the U.S. and in Europe. In Brazil the majority of cars utilize neat ethanol or lower-level blends produced from sugar cane, while in the U.S.
the ethanol production (13,300 million gallons in 2013) is mainly based on corn. In 2013 the U.S. led the world market (57% of the overall production), followed by Brazil at 27% and the E.U. at 6% (see Figure 3). To understand the order of magnitude of the numbers reported above, Figure 5 shows the data (taken from the U.S. Department of Energy report[25]) on the consumption of renewable and alternative fuels (top) compared with that of traditional fuels (bottom) in the United States for the year 2013.
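A quick back-of-envelope check of the market shares quoted above (all figures taken from the text) gives the implied world ethanol production and the corresponding Brazilian and European volumes:

```python
# 2013 ethanol figures quoted in the text: the U.S. produced 13,300 million
# gallons, stated to be 57% of world output; Brazil held 27% and the E.U. 6%.
us_production = 13_300  # million gallons (2013)
us_share, brazil_share, eu_share = 0.57, 0.27, 0.06

world_total = us_production / us_share
print(f"implied world production: {world_total:,.0f} million gallons")
print(f"implied Brazil output:    {brazil_share * world_total:,.0f} million gallons")
print(f"implied E.U. output:      {eu_share * world_total:,.0f} million gallons")
```

The shares imply a world total of roughly 23,000 million gallons, putting Brazilian output around 6,300 million gallons, i.e. the same order of magnitude as U.S. biodiesel and ethanol figures cited earlier.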

Table 2 - Properties of different ethanol and methanol gasoline blended fuels (extracted from ref. 20)
Figure 5 – Consumption of renewable and alternative fuel (top) and of traditional fuel (bottom) in the United States (for the year 2013).
[1] A.M. Ashraful et al. / Energy Conversion and Management 80 (2014) 202–228
[4] V. Oltra, M. Saint Jean / Journal of Cleaner Production 17 (2009) 201–213
[6] A. Lif, K. Holmberg / Advances in Colloid and Interface Science 123– 126 (2006) 231–239
[7] C.-Y. Lin, L.-W. Chen / Fuel 85 (2006) 593–600
[8] C.-Y. Lin, H.-A. Lin / Fuel Processing Technology 88 (2007) 35–41
[9] J. Ghojel et al. / Applied Thermal Engineering 26 (2006) 2132–2141
[10] M. Nadeem et al. / Fuel 85 (2006) 2111–2119
[11] M. Capocelli et al. / Chemical Engineering Journal 210 (2012) 9–17
[12] Bhasarkar et al., Ind. Eng. Chem. Res. 2013, 52, 9038−9047.
[14] Desulfurization process and systems utilizing hydrodynamic cavitation. US 8002971 B2. Production of Biofuels and Chemicals with Ultrasound. Springer Dordrecht Heidelberg New York London 2015.
[15] Capocelli M., et al., 2014. Chemical Engineering Transactions, 38, 13-18
[18] A. Lif, K. Holmberg / Advances in Colloid and Interface Science 123– 126 (2006) 231–239
[19] U.S. Department of Energy, Energy Efficiency & Renewable Energy. 2013 Renewable Energy Data Book
[20] Arcoumanis et al., 2008. The potential of di-methyl ether (DME) as an alternative fuel for compression-ignition engines: A review. Fuel 87 (2008) 1014–1030
[21] A.M. Ashraful et al. / Energy Conversion and Management 80 (2014) 202–228
[22] U.S. Department of Energy, Energy Efficiency & Renewable Energy. 2013 Renewable Energy Data Book
[23] Gravalos et al., 2011. Alternative Fuel, Dr. Maximino Manzanera (Ed.), ISBN: 978-953-307-372-9, InTech.
[25] U.S. Department of Energy, Energy Efficiency & Renewable Energy. 2013 Renewable Energy Data Book


Author: Vincenzo Piemonte, Associate Professor, University UCBM – Rome (Italy)


1. Theme description 

Many efforts have been made to move from today’s fossil-based economy to a more sustainable economy based on biomass. The reasons can be summarized as follows:

  • the need to develop an environmentally, economically and socially sustainable global economy,
  • the anticipation that oil, gas, coal and phosphorus will reach peak production in the not too distant future and that prices will climb,
  • the desire of many countries to reduce their over-dependence on fossil fuel imports and, consequently, to diversify their energy sources,
  • the global issue of climate change and the need to reduce atmospheric greenhouse gases (GHG) emissions.

Current global bio-based chemical and polymer production (excluding biofuels) is estimated to be around 50 million tonnes [1]. Examples of bio-based chemicals include non-food starch, cellulose fibres and cellulose derivatives, tall oils, fatty acids and fermentation products such as ethanol and citric acid. However, the majority of organic chemicals and polymers are still derived from fossil based feedstocks, predominantly oil and gas.

Recently, the consumer demand for environmentally friendly products, the population growth and limited supplies of non-renewable resources have opened new opportunities for bio-based chemicals and polymers.

Bio-based goods can be produced in single-product processes or in integrated biorefinery processes producing both bio-based products and secondary energy carriers (fuels, power, heat), in analogy with oil refineries [2][3].

Currently, the main driver for the development and implementation of biorefinery processes is the transportation sector. Significant amounts of renewable fuels are necessary in the short and medium term to meet policy regulations both inside and outside Europe.

A very promising approach to reduce biofuel production costs is to use so called biofuel-driven biorefineries for the co-production of both value-added products (chemicals, materials, food, feed) and biofuels from biomass resources in a very efficient integrated approach.

From an overall point of view, a key factor in the realisation of a successful bio-based economy will be the development of biorefinery systems that are well integrated into the existing infrastructure.

At the global scale, the production of bio-based chemicals could generate US$ 10-15 billion of revenue for the global chemical industry [3].

Figure 1 - Biorefinery system scheme [2]

Biorefineries can be classified mainly according to the feedstocks used to produce bio-based goods (see figure 1). Major feedstocks are perennial grasses, starch crops (e.g. wheat and maize), sugar crops (e.g. beet and cane), lignocellulosic crops (e.g. managed forest, short rotation coppice, switchgrass), lignocellulosic residues (e.g. stover and straw), oil crops (e.g. palm and oilseed rape), aquatic biomass (e.g. algae and seaweeds), and organic residues (e.g. industrial, commercial and post-consumer waste). These feedstocks can be processed in different units of a biorefinery, called platforms. The platforms include single-carbon molecules such as biogas and syngas; 5- and 6-carbon carbohydrates from starch, sucrose or cellulose; a mixed 5- and 6-carbon carbohydrate stream derived from hemicelluloses; lignin; oils (plant-based or algal); organic solutions from grasses; and pyrolytic liquids. These primary platforms can be converted to a wide range of marketable products using combinations of thermal, biological and chemical processes.


2. Biobased Platforms

2.1 Biogas Platform

Currently, biogas production is mainly based on the anaerobic digestion (see figure 2) of high-moisture-content biomass such as manure, waste streams from food processing plants or waste from municipal effluent treatment systems. Biogas production from energy crops will also increase and will have to be based on a wide range of crops grown in versatile, sustainable crop rotations. Biogas production can be part of sustainable biofuel-based biorefineries as it can derive value from wet streams. This value can be increased by optimizing the methane yield and economic efficiency of biogas production [4] and by deriving nutrient value from the digestate streams [5].

Figure 2 - Biogas production system scheme
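A common first estimate of the methane yield obtainable from a given substrate is the Buswell equation, a standard stoichiometric upper bound from the anaerobic-digestion literature (not taken from this article); a minimal sketch for a glucose-like carbohydrate feedstock:

```python
# Theoretical methane yield from the Buswell equation:
#   CnHaOb + (n - a/4 - b/2) H2O -> (n/2 + a/8 - b/4) CH4 + (n/2 - a/8 + b/4) CO2
# This gives an upper bound on the biochemical methane potential of a substrate.

MOLAR_VOLUME = 22.414  # L/mol, ideal gas at 0 degC and 1 atm

def buswell_methane(n, a, b, molar_mass):
    """Return (CH4 mole fraction of the biogas, theoretical NL CH4 per kg substrate)."""
    ch4 = n / 2 + a / 8 - b / 4
    co2 = n / 2 - a / 8 + b / 4
    fraction = ch4 / (ch4 + co2)
    litres_per_kg = ch4 * MOLAR_VOLUME / molar_mass * 1000.0
    return fraction, litres_per_kg

# Glucose, C6H12O6, as a proxy for a carbohydrate-rich feedstock
frac, yield_l = buswell_methane(6, 12, 6, 180.16)
print(f"glucose: {frac:.0%} CH4 in the biogas, {yield_l:.0f} NL CH4 per kg")
```

Real digesters reach only a fraction of this theoretical value (part of the substrate goes to biomass growth or is not degraded), which is precisely the yield-optimization margin mentioned above.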


2.2 Sugar Platform

The sugar platform implements processes to degrade sucrose into glucose or to hydrolyse starch or cellulose into glucose. Glucose serves as feedstock for fermentation processes yielding a variety of important chemical building blocks.

The hydrolysis of hemicelluloses and the subsequent fermentation of the resulting carbohydrate streams can in theory produce the same products as six-carbon sugar streams; however, technical, biological and economic barriers need to be overcome before these opportunities can be exploited. Chemical manipulation of these streams can provide a range of useful molecules (see figure 3).

Indeed, by selective dehydration, hydrogenation and oxidation reactions it is possible to obtain useful products such as sorbitol, furfural, glucaric acid, hydroxymethylfurfural (HMF) and levulinic acid. Over 1 million tonnes of sorbitol are produced per year as a food ingredient, personal care ingredient (e.g. toothpaste) and for industrial use [6], [7].

Figure 3 - Sugar platform scheme [2]


2.3 Vegetable Oil Platform

Global oleochemical production in 2009 amounted to 7.7 million tonnes of fatty acids and 2.0 million tonnes of fatty alcohols [8]. The majority of fatty acid derivatives are used as surface-active agents in soaps, detergents and personal care products [9].

Major sources for these oils are coconut, palm and palm kernel oil, which are rich in C12–C18 saturated and monounsaturated fatty acids. Rapeseed oil, high in oleic acid, is a favoured source for biolubricants. Commercialized bifunctional building blocks for bio-based plastics include sebacic acid and 11-aminoundecanoic acid, both from castor oil, and azelaic acid derived from oleic acid. Dimerized fatty acids are primarily used for polyamide resins and polyamide hot melt adhesives.

Biodiesel production has increased significantly in recent years with a large percentage being derived from palm, rapeseed and soy oils. In 2009 biodiesel production was around 14 million tonnes; this quantity of biodiesel co-produces around 1.4 million tonnes of glycerol.

Glycerol is an important co-product of fatty acid/alcohol production; market demand for glycerol in 2009 was 1.8 million tonnes [8]. Glycerol is also an important co-product of fatty acid methyl ester (FAME) biodiesel production. It can be purified and sold for a variety of uses [5].
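The roughly 10:1 ratio between biodiesel and glycerol quoted above follows directly from the transesterification stoichiometry; a minimal check, using triolein as a representative triglyceride (an illustrative choice, since real feedstocks are mixtures):

```python
# Stoichiometric check of the ~10 wt% glycerol co-production:
#   triolein + 3 CH3OH -> 3 methyl oleate (FAME) + glycerol
M_METHYL_OLEATE = 296.49  # g/mol
M_GLYCEROL = 92.09        # g/mol

glycerol_per_kg_fame = M_GLYCEROL / (3 * M_METHYL_OLEATE)
print(f"glycerol co-product: {glycerol_per_kg_fame:.3f} kg per kg of biodiesel")

biodiesel_2009 = 14.0  # million tonnes (figure quoted in the text)
print(f"=> about {biodiesel_2009 * glycerol_per_kg_fame:.1f} million tonnes of glycerol")
```

The ~0.10 kg/kg stoichiometric figure reproduces the 1.4 million tonnes of glycerol co-produced with the 14 million tonnes of biodiesel cited above.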

2.4 Algae Oil Platform

Algae biomass can be a sustainable renewable resource for chemicals and energy. The major advantages of using microalgae as renewable resource are:

  • Compared to plants, algae have a higher productivity. This is mostly because the entire biomass can be used, in contrast to plants, which have roots, stems and leaves. For example, the oil productivity per unit of land surface can be up to 10 times higher than that of palm oil.
  • Microalgae can be cultivated in seawater or brackish water on non-arable land, and do not compete for resources with conventional agriculture.
  • The essential elements for growth are sunlight, water, CO2 (a greenhouse gas), and inorganic nutrients such as nitrogen and phosphorous which can be found in residual streams.
  • The biomass can be harvested during all seasons and is homogenous and free of lignocellulose.

Microalgae can contain a high protein content, with all 20 amino acids present. Carbohydrates are also present, and some species are rich in storage and functional lipids. Other valuable compounds include pigments, antioxidants, fatty acids, vitamins, antifungal, antimicrobial and antiviral agents, toxins, and sterols.

2.5 Lignin Platform

Until now, the lignin platform has been based mainly on lignosulfonates (see figure 4). These sulfonates are separated from acid sulfite pulping and are used in a wide range of lower-value applications. Major end-use markets include construction, mining, animal feeds and agricultural uses.

Figure 4 - Lignin platform scheme [2]

Besides lignosulfonates, Kraft lignin is produced as a commercial product at about 60 kton/y. New extraction technologies will lead to an increase in Kraft lignin production at the mill for use as an external energy source and for the production of value-added applications [10].

The production of bioethanol from lignocellulosic feedstocks could result in new forms of higher quality lignin becoming available for chemical applications. The production of more value added chemicals from lignin (e.g. resins, composites and polymers, aromatic compounds, carbon fibres) is viewed as a medium to long term opportunity which depends on the quality and functionality of the lignin that can be obtained [11].


3. Opportunities

The opportunities for chemical and polymer production from biomass have been comprehensively assessed in several reports and papers [12], [13], [14], [15], [16], [17].
Figure 5 - Plastics Europe anticipated biopolymer production capacity (in tonnes/year) by 2015

Bio-PE: Biorenewable Polyethylene; Bio-PET: Biorenewable Polyethylene Terephthalate; PLA: Polylactic Acid; PHA: Polyhydroxyalkanoates; BP: Biodegradable Polyesters; BSB: Biodegradable Starch Blends; Bio-PVC: Biorenewable Polyvinyl Chloride; RC: Regenerated Cellulose; PLA-B: Polylactic Acid Blends; Bio-PP: Biorenewable Polypropylene; Bio-PC: Biorenewable Polycarbonate.

An international study [14] found that, with favourable market conditions, the production of bulk chemicals from renewable resources could reach 113 million tonnes by 2050, representing 38% of all organic chemical production. Under more conservative market conditions the market could still be a significant 26 million tonnes, representing 17.5% of organic chemical production (see figure 5).
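Each scenario above pairs an absolute tonnage with a market share, so the total organic chemical production each scenario assumes can be back-calculated (the two scenarios evidently assume different overall market sizes by 2050):

```python
# Scenario figures quoted in the text: tonnes of bulk chemicals from
# renewables and the corresponding share of all organic chemical production.
scenarios = {
    "favourable":   (113e6, 0.38),
    "conservative": (26e6, 0.175),
}

implied_totals = {name: tonnes / share for name, (tonnes, share) in scenarios.items()}
for name, total in implied_totals.items():
    print(f"{name:>12}: implied total organic chemical production ~ {total / 1e6:.0f} Mt")
```

The favourable scenario thus assumes a total organic chemicals market of roughly 300 Mt by 2050, about twice the market size assumed in the conservative case.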

Currently commercialised bio-polymers (i.e. PLA, PHA, thermoplastic starch) are demonstrating strong market growth. Market analyses show annual growth in the 10–30% range [18], [19], [20].

Bio-based polymer markets are dominated by biodegradable food packaging and food service applications. It can be argued that the production of more stable, stronger and longer-lasting biopolymers will lead to CO2 being sequestered for longer periods and favours recycling rather than composting, where the carbon is released very quickly without any energy benefit [5].

Among the most important players in biorefining are Novamont (Italy), leader in biodegradable bags based on Mater-Bi (a bioplastic derived from thermoplastic starch); NatureWorks (U.S.A.), leader in polylactic acid production (a bio-based plastic also used for biodegradable bottles); and Biochemtex, part of the M&G Chemicals Group (Italy), specialized in the production of second-generation bioethanol.

[1] Higson, A 2011. NNFCC. Estimate of chemicals and polymers from renewable resources. 2010. NNFCC. Estimate of fermentation products. 2010. Personal communication
[2] Kamm, B., P. Gruber, M. Kamm [ed.]. Biorefineries - Industrial Processes and Products. Weinheim : Wiley-VCH, 2006. ISBN-13 978-3-527-31027-2.
[3] World Economic Forum. The Future of Industrial Biorefineries. s.l. : World Economic Forum, 2010.
[4] Bauer A., Hrbek a, B. Amon, V. Kryvoruchko, V. Bodiroza, H. Wagentristl, W. Zollitsch, B. Liebmanne, M. Pfeffere, A. Friedle, T. Amon. 2007. Potential of biogas production in sustainable biorefinery concepts.
[5] De Jong E., Higson A., Walsh P., Wellisch M., 2011, Bio-based Chemicals Value Added Products from Biorefineries, IEA Bioenergy, Task 42 Biorefinery.
[6] Vlachos, D.G. J. G. Chen,R. J. Gorte, G.W. Huber, M. Tsapatsis. Catalysis Center for Energy Innovation for Biomass Processing: Research Strategies and Goals. Catal Lett (2010) 140:77–84
[7] ERRMA. EU-Public/PrivateInnovation Partnership "Building the Bio-economy by 2020". 2011.
[8] ICIS Chemical Business. Soaps & Detergents Oleochemicals. ICIS Chemical Business. 2010, January 25-February 7.
[9] Taylor D.C., Smith M.A., Fobert P, Mietkiewska E, Weselake R.J. 2011 Plant systems - Metabolic engineering of higher plants to produce bio-industrial oils. In: Murray Moo-Young (ed.), Comprehensive Biotechnology, Second Edition, volume 4, pp. 67–85. Elsevier.
[10] Öhman, F., Theliander, H., Tomani, P., Axegard, P. 2009. A method for separating lignin from black liquor, a lignin product, and use of a lignin product for the production of fuels or materials. WO104995
[11] Zakzeski, J., P.C.A. Bruijnincx, A.L. Jongerius, and B.M. Weckhuysen. The catalytic valorization of lignin for the production of renewable chemicals. Chemical Reviews 110 (6), 3552-3599.
[12] Shen, L., Haufe, J., Patel, M.K. Product overview and market projection of emerging bio-based plastics. s.l. : Utrecht University, 2009.
[13] U.S. Department of Agriculture. U.S. Biobased Products, Market Potential and Projections Through 2025. s.l. : U.S. Department of Agriculture, 2008.
[14] Patel, M., Crank, M., Dornburg, V., Hermann, B., Roes, L., Hüsing, B., van Overbeek, L., Terragni, F., Recchia, E. 2006. Medium and long-term opportunities and risks of the biotechnological production of bulk chemicals from renewable resources - The BREW Project.
[15] Bozell, J.J., G.R. Petersen. 2010.Technology development for the production of biobased products from biorefinery carbohydrates - the US Department of Energy's "Top 10" revisited. Green Chemistry.12, 539-554.
[16] Werpy, T., G. Petersen. 2004. Top Value Added Chemicals from Biomass, Volume 1: Results of Screening for Potential Candidates from Sugars and Synthesis Gas.
[17] Nexant ChemSystems. Biochemical Opportunities in the United Kingdom. York : NNFCC, 2008.
[18] Pira. The Future of Bioplastics for Packaging to 2020. s.l. : Pira, 2010.
[19] SRI Consulting. Biodegradable Polymers. [Online] [Cited: 17 January 2011.]
[20] Helmut Kaiser Consultancy. Bioplastics Market Worldwide 2007-2025. [Online] 2009. [Cited: 17 January 2011.]

New Catalytic Process for Production of Olefins

Author: Marcello De Falco, Associate Professor, University UCBM – Rome (Italy)

1. Theme description

Olefins, mainly ethylene (C2H4) and propylene (C3H6), are key intermediates and feedstocks for the production of a wide number of chemical products, such as polyolefins (polyethylene – PE, polypropylene – PP), mono-ethylene glycol (MEG), ethylene oxide (EO) and derivatives, propylene oxide (PO) and derivatives, polyvinyl chloride (PVC), ethylene dichloride (EDC), styrene, acrylonitrile, cumene, acetic acid, etc.

At present, worldwide demand for ethylene and propylene exceeds 200 million tons per year, but the conventional production processes suffer from a series of problems, such as high cost and low conversion efficiency.

In the following, the traditional technologies, i.e. Thermal Steam Cracking and Fluid Catalytic Cracking, are first presented. Then the innovations in olefin production are described and assessed.


2. Olefins production conventional processes

The most used olefins industrial production processes are:
  • Thermal Steam Cracking (TSC);
  • Fluid Catalytic Cracking (FCC).

TSC is a thermal process by which a feedstock, typically composed of naphtha, ethane or propane, is heated in a furnace comprising a convection and a radiant section and mixed with steam to reduce coke formation. The steam addition depends on the feedstock (from 0.2 kg of steam per kg of hydrocarbon for ethane to 0.8 kg per kg for naphtha).
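The dilution-steam ratios quoted above translate directly into furnace steam demand; a trivial sketch (the 100 t/h feed rate is an illustrative assumption):

```python
# Dilution-steam requirement for a TSC furnace, from the steam-to-hydrocarbon
# ratios quoted in the text (0.2 kg/kg for ethane, 0.8 kg/kg for naphtha).
STEAM_RATIO = {"ethane": 0.2, "naphtha": 0.8}  # kg steam per kg hydrocarbon

def dilution_steam(feedstock, feed_rate_t_h):
    """Steam flow rate (t/h) for a given feedstock and hydrocarbon feed rate."""
    return STEAM_RATIO[feedstock] * feed_rate_t_h

for feed in ("ethane", "naphtha"):
    print(f"{feed}: a 100 t/h feed needs {dilution_steam(feed, 100):.0f} t/h of steam")
```

The fourfold higher steam demand for naphtha is one reason liquid feeds carry a larger utility burden than ethane cracking.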

Then the products (ethylene, propylene, butadiene, hydrogen) are quickly cooled to avoid subsequent reactions (quenching) and are then separated by means of a series of operations (refer to Figure 1).

The reaction network involved in thermal cracking is complex and is generally based on a free-radical mechanism. Basically, two types of reactions occur in a thermal cracking process:

  • primary cracking, with the initial formation of paraffin and olefins;
  • secondary cracking, with the formation of light products rich in olefins.

TSC is an energy-intensive process: the specific energy consumption is about 3,050 kcal per kg of produced olefin.

FCC is a multi-component catalytic system in which the catalyst pellets are “fluidized” by the inlet steam flow rate, and the cracking process occurs at a lower temperature than in TSC. A typical block diagram is shown in Figure 2, while an FCC reactor drawing is reported in Figure 3.

Traditional olefin production technologies suffer from inefficiency due to high temperatures and energy costs, complex and expensive separation units, and significant CO2 emissions.

As a consequence, strong interest is growing in the development of catalytic olefin production technologies that are more flexible, more efficient, less expensive and of lower environmental impact.

In the following, some of the most interesting technologies developed during the last years are presented and described.

Fig. 1 – Thermal Steam Cracking plant layout [1] 
 Fig. 2 - Fluid Catalytic Cracking block diagram [2]
 Fig. 3 – FCC reactor drawing [2]

3. Innovative Technologies

Advanced Catalytic Olefins (ACO)

The Advanced Catalytic Olefins (ACO™) technology has been developed by Kellogg Brown & Root LLC (KBR) and SK Innovation Global Technology. It is an FCC-type process with an improved catalyst able to convert the feedstock into larger quantities of ethylene and propylene, with a higher share of propylene than conventional processes (the ratio of produced propylene to produced ethylene is 1, versus 0.7 for commercial processes). The ACO process produces 10–25% more olefins than traditional FCC processes, with a reduction of the energy consumed per unit of olefins of 7–10% [3].
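The propylene-to-ethylene ratios quoted above fix the split of the combined ethylene + propylene output; a minimal check:

```python
# Product split implied by the propylene-to-ethylene ratios quoted in the text:
# 1.0 for the ACO process versus 0.7 for conventional crackers.
def propylene_fraction(p_to_e_ratio):
    """Propylene share of the combined ethylene + propylene output."""
    return p_to_e_ratio / (1.0 + p_to_e_ratio)

for name, ratio in (("ACO", 1.0), ("conventional", 0.7)):
    print(f"{name:>12}: {propylene_fraction(ratio):.1%} propylene")
```

Moving from a 0.7 to a 1.0 ratio raises the propylene share of the olefin slate from about 41% to 50%, which matters commercially when propylene demand grows faster than ethylene demand.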

The plant configuration is composed of four sections: riser/reactor, disengager, stripper and regenerator. Figure 4 shows a simplified process scheme, while Figure 5 shows a picture of the first ACO commercial demonstration unit, installed in South Korea with a production capacity of 40 kta of olefins.

Fig. 4 – ACO plant process scheme [3]
Fig. 5 - ACO Commercial Demonstration unit installed in Ulsan (South Korea)
PCC Process

Propylene Catalytic Cracking (PCC) is a fluid-solids naphtha cracking process patented by Exxon Mobil, based on an optimized combination of catalyst, reactor design and operating conditions able to modulate the reaction selectivity, leading to crucial economic benefits in comparison with conventional processes.

The PCC process is able to produce propylene directly at chemical-grade concentration, thus avoiding expensive fractionation units. Moreover, the specific operating conditions allow the minimization of aromatics production [4].

Exxon is testing these innovative solutions in tailored pilot facilities.

Indmax FCC Process

The Indmax process, developed by the Indian Oil Corporation, converts heavy feedstocks into light olefins. It is an FCC-type process in which the reactions are supported by a patented catalyst that reduces the contact time, leading to higher selectivity towards light olefins (ethylene and propylene).

Another crucial characteristic of the Indmax FCC process is its high production flexibility: the process can easily be adjusted to modulate the output, maximizing propylene, gasoline, or combinations of products (propylene and ethylene, or propylene and gasoline) [5].

Aither Chemicals’ catalytic process

Aither Chemicals, a company located in the U.S., has developed an innovative catalytic cracking process for the production of ethylene, acetic acid, ethylene derivatives such as ethylene oxide (EO) and ethylene glycol (EG), polyethylene (PE, LLDPE, HDPE), acetic acid derivatives such as acetic anhydride, ethylene-acetic-acid derivatives such as vinyl acetate monomer (VAM) and ethyl vinyl acetate (EVA), and other chemicals and plastics [6]. The process uses oxygen instead of steam and, overall, requires much less energy (-80%) and produces 90% less carbon dioxide, making it more environmentally sustainable.

Moreover, the CO2 and CO streams are captured at the outlet of the catalytic process and used to produce chemicals and polymers, thus effectively eliminating greenhouse gas emissions.

The production volumes foreseen for the innovative process are 224 kt of ethylene, 112 kt of acetic acid, 30 kt of CO2 and 15 kt of CO.

Methane-to-olefins processes

Many research efforts are devoted to finding new routes and process configurations to convert natural gas directly into olefins using low-temperature reactors.

There are two possible methane-to-olefins (MTO) processes:

  • Indirect process, in which methane is first converted into syngas, methanol or ethane, from which olefins are then produced;
  • Direct process, in which olefins are produced from methane in a single conversion step based on a modified Fischer-Tropsch reaction.

Although the direct route seems more interesting, at present a good light-olefins selectivity has not been obtained [7], and MTO processes are more energy-intensive than conventional cracking technologies. The only pre-commercial-scale application has been developed by UOP and Total Petrochemicals in Feluy (Belgium): the plant is an indirect process able to produce ethylene and propylene via methanol and syngas.
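The indirect route chains three well-known conversions. Written out as textbook stoichiometry (a sketch, not the licensors' exact chemistry):

```latex
\begin{aligned}
\mathrm{CH_4 + H_2O} &\rightarrow \mathrm{CO + 3\,H_2}      && \text{(steam reforming to syngas)} \\
\mathrm{CO + 2\,H_2} &\rightarrow \mathrm{CH_3OH}           && \text{(methanol synthesis)} \\
2\,\mathrm{CH_3OH}   &\rightarrow \mathrm{C_2H_4 + 2\,H_2O} && \text{(methanol to ethylene)} \\
3\,\mathrm{CH_3OH}   &\rightarrow \mathrm{C_3H_6 + 3\,H_2O} && \text{(methanol to propylene)}
\end{aligned}
```

The multiple conversion steps, each with its own heat duty, are one reason the indirect route is more energy-intensive than direct cracking.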

Fig. 6 – MTO plant in Feluy, Belgium [8]

Some interesting patents have been filed on the MTO topic [9], [10], [11], as well as some thorough scientific publications in important international journals [12], [13], [14].

Propane Dehydrogenation

UOP has developed an innovative Propane Dehydrogenation (PDH) process able to produce ethylene and propylene at lower cost, thanks to lower energy usage and a more stable platinum-based catalyst [15]. The process, called Oleflex, is divided into three sections: the reaction section, consisting of four radial-flow reactors, the product purification section and the catalyst regeneration section. Fig. 7 shows a process layout. Currently, six Oleflex units are installed, producing more than 1,250,000 t/a of propylene worldwide.

Fig. 7 – UOP’s Oleflex process layout [8]

Shell Higher Olefins Process

The Shell Higher Olefins Process (SHOP) is an innovative olefins production technology, developed by Royal Dutch Shell, based on a homogeneous catalyst and used for the production of linear α-olefins (from C4 to C40) and internal olefins from ethene.

The process architecture consists of three steps:
  • Oligomerization (conversion of a monomer or a mixture of monomers into an oligomer; temperature = 90–100°C, pressure = 100–110 bar, polar solvent);
  • Isomerization (molecular rearrangement over a metal catalyst; 100–125°C and 10 bar);
  • Metathesis (alkenes are converted into new products by the breaking and re-formation of C–C double bonds over an alumina-based catalyst; 100–125°C and 10 bar) [17].
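The three steps and their quoted operating windows can be collected into a small lookup table; the sketch below is illustrative (the data structure and helper are assumptions, the conditions are those listed above).

```python
# SHOP step conditions as quoted in the text, as a small lookup table.
SHOP_STEPS = [
    {"step": "oligomerization", "T_C": (90, 100),  "P_bar": (100, 110)},
    {"step": "isomerization",   "T_C": (100, 125), "P_bar": (10, 10)},
    {"step": "metathesis",      "T_C": (100, 125), "P_bar": (10, 10)},
]

def conditions(step_name):
    """Return the (temperature window, pressure window) for a named step."""
    for s in SHOP_STEPS:
        if s["step"] == step_name:
            return s["T_C"], s["P_bar"]
    raise KeyError(step_name)

print(conditions("oligomerization"))
```

Note the contrast between the first step (high pressure, ~100 bar) and the two downstream steps, which run at only 10 bar.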

At present, SHOP is widely applied, with a worldwide production capacity of 1,190,000 t of linear alpha and internal olefins per year.

Catalytic Partial Oxidation of ethane

ENI and the Italian research centre CNR developed an ethylene production process through Short Contact Time – Catalytic Partial Oxidation (CPO) of ethane. The process is supported by a patented monolithic catalyst able to improve the ethylene yield up to 55 wt.% [18].

At present, the technology has been validated through a bench-scale unit, by which the optimal operating conditions have been identified. However, the industrial-scale application is not ready yet, since optimization of the CPO reactor design and improvement of the catalyst reliability are still needed.

[4] M.W. Bedell, P.A. Ruziska, T.R. Steffens, “On-Purpose Propylene from Olefinic Streams”, Davison Catalagram, 94 (2004), Special Edition: Propylene.
[18] L. Basini, S. Cimino, A. Guarinoni, “Short Contact Time Catalytic Partial Oxidation (SCT-CPO) for Synthesis Gas Processes and Olefins Production”, Ind. Eng. Chem. Res., 2013, 52 (48), 17023–17037.

Gasification Process

Author: Andrea Milioni – Chemical Engineer – On Contract Cooperator - University UCBM – Rome (Italy)

1. Theme description

The gasification process is the thermochemical conversion of a carbonaceous solid or liquid into a gas in the presence of a gasifying agent: air, oxygen or steam. By this definition, combustion could also be regarded as a form of gasification; however, gasification by definition requires an oxygen supply lower than the amount needed for complete combustion to carbon dioxide and water (the stoichiometric amount). Under these conditions the reaction products are not only carbon dioxide and water, but a combustible gas mixture whose heating value depends on three variables: the feed elemental composition, the inlet gas composition (air, oxygen or steam) and the gasifier typology. Furthermore, the process produces a solid carbonaceous phase (CHAR), condensable vapors (TAR) and ashes.
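The sub-stoichiometric oxygen supply is usually expressed as an equivalence ratio (ER), the fed oxygen divided by the stoichiometric requirement. A minimal sketch of the arithmetic, assuming an illustrative wood-like composition CH1.4O0.6 and ER = 0.3 (neither figure is from the text):

```python
def stoich_o2(x_H, y_O):
    """Moles of O2 per mole of carbon for complete combustion of a
    fuel CH_x O_y:  CH_x O_y + (1 + x/4 - y/2) O2 -> CO2 + (x/2) H2O."""
    return 1.0 + x_H / 4.0 - y_O / 2.0

# Wood is roughly CH1.4O0.6 (illustrative composition)
o2_stoich = stoich_o2(1.4, 0.6)   # mol O2 per mol C for full combustion
er = 0.3                          # a typical gasification equivalence ratio
o2_fed = er * o2_stoich           # oxygen actually supplied to the gasifier

print(f"stoichiometric O2: {o2_stoich:.2f} mol/mol C, fed at ER=0.3: {o2_fed:.2f}")
```

Feeding only ~30% of the stoichiometric oxygen is what leaves CO, H2 and CH4 in the product gas instead of driving everything to CO2 and H2O.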

Gasification can be carried out directly, by adding oxygen (or air) and exploiting the exothermicity of the reactions to provide the energy necessary for the process, or by pyrolysis, supplying heat from outside in the complete absence of oxygen. The gaseous products, essentially hydrogen, carbon monoxide, methane and carbon dioxide, may be used for several purposes such as heating, electricity generation and the production of chemicals and fuels.

The gasification process was developed on an industrial scale during the 19th century to produce town gas for lighting and cooking. Later, natural gas and electricity replaced it for these applications, and it was used only for the production of some synthetic chemicals. Since the 1970s, following the fossil fuel crises and the realization of dependence on foreign oil, the gasification process has been re-evaluated, in particular biomass gasification, driven also by interest in the reduction of greenhouse gas emissions and in the local availability of renewable energy sources.

2. Fundamentals

The gasification process can be divided into four basic steps (sketched in Figure 1) that occur within a suitable reactor: heating/drying, pyrolysis, gas-solid reactions and gas-phase reactions [1]. When the reactor design ensures high-speed heat transfer and the feed is introduced as small particles, the whole process takes place in a short time (about one second) [2].

Heating and drying: in this first step the temperature reaches about 300°C and the feed is completely dried. The greater the moisture content, the higher the energy needed for drying and the lower the enthalpy of the produced gases. For this reason, a naturally dry (or previously dried) biomass is desirable. During heating, a typical heat-transfer phenomenon occurs, with a temperature profile decreasing towards the particle centre: the greater the particle radius, the longer the time required for the treatment.
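The energy penalty of moisture can be estimated with a simple heat balance. The sketch below uses standard water properties (cp ≈ 4.18 kJ/kg·K, latent heat ≈ 2257 kJ/kg at 100°C); the 1 kg basis and the 30% vs 10% moisture levels are illustrative assumptions.

```python
def drying_energy_kj(wet_mass_kg, moisture_frac, T0_C=25.0):
    """Approximate heat needed to warm the moisture in a feed from T0 to
    100 degC and evaporate it (sensible + latent), ignoring heat absorbed
    by the dry solid itself."""
    CP_WATER = 4.18    # kJ/(kg K), liquid water specific heat
    H_VAP = 2257.0     # kJ/kg, latent heat of vaporization at 100 degC
    water = wet_mass_kg * moisture_frac
    return water * (CP_WATER * (100.0 - T0_C) + H_VAP)

# 1 kg of biomass at 30% vs 10% moisture (illustrative values)
print(f"30% moisture: {drying_energy_kj(1.0, 0.30):.0f} kJ")
print(f"10% moisture: {drying_energy_kj(1.0, 0.10):.0f} kJ")
```

Tripling the moisture triples the drying duty (roughly 770 kJ vs 260 kJ per kg of feed here), heat that is subtracted from the enthalpy of the product gas, which is why dry feedstock is preferred.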

Pyrolysis: in this second step, a rapid thermal anoxic degradation of the carbonaceous material takes place. The ideal temperature for this purpose is between 400 and 500°C.

Released products: Gases: H2, CO, CH4, CO2 and some other light hydrocarbons.

Vapors: exposure to high temperatures leads to a thermal cracking process generating light and condensable compounds (TAR), consisting essentially of polyaromatic hydrocarbons.

Solids: a porous residue called CHAR, consisting of a carbon residue and inorganic compounds (ash).

Gas-Solid Reactions: reactions occurring between the CHAR and the added gasifying agent (oxygen, steam, or both). The exothermic reactions, with negative ΔH, help to provide energy for endothermic processes such as drying and pyrolysis.


Gas-phase Reactions: the two main gas-phase reactions are the water-gas shift and methanation, the latter being relevant for synthetic natural gas production.
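Written out explicitly (standard textbook stoichiometry; the ΔH values are the usual standard values at 298 K, not taken from the text):

```latex
\begin{aligned}
\mathrm{CO + H_2O}   &\rightleftharpoons \mathrm{CO_2 + H_2}   && \Delta H^\circ_{298} \approx -41\ \mathrm{kJ/mol}  \quad \text{(water-gas shift)} \\
\mathrm{CO + 3\,H_2} &\rightleftharpoons \mathrm{CH_4 + H_2O}  && \Delta H^\circ_{298} \approx -206\ \mathrm{kJ/mol} \quad \text{(methanation)}
\end{aligned}
```

Both are exothermic, so lower temperatures favor the products, which is why methane-rich syngas calls for moderate gasification temperatures.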



Figure 1 - The Process of Thermal Gasification [3]

3. Gasifiers Typology

Depending on the modality of contact between the gasifying agent and the charge, four reactor types can be identified:

  • Fixed Bed;
  • Fluidized Bed;
  • Entrained Flow;
  • Indirect.

Fixed Bed Gasifiers represent the most consolidated technology thanks to their constructional simplicity, although some difficulty in maintaining a uniform temperature along the reactor may arise. This involves a series of problems related both to the control system and to the quality of the produced syngas. Fixed bed gasifiers are generally used for small-to-medium size plants (no more than 10-15 tons/hour of biomass). Scaling up to higher capacities is very complex because of the impossibility of maintaining a uniform temperature distribution in large beds.

Depending on the point of product gas intake, different geometries can be classified:

  • Updraft (counter-current);
  • Downdraft (co-current);
  • Crossdraft (cross-current).

The main fixed-bed gasification technologies are known as the Lurgi process [4], the British Gas Lurgi (BGL) process [5], the Wellman-Galusha (WG) process [6] and the Ruhr 100 process [7].


Figure 2 - Fixed Bed Gasifiers: Updraft Gasifier (a), Cross Draft Gasifier (b), Down Draft Gasifier (c) [8]
The Fluidized Bed Gasifiers use, together with the feed, an inert material such as sand or dolomite, which promotes mixing, reaction kinetics and heat exchange between the biomass particles, improving the gasifier efficiency. Periodic sand replacement is required, mostly when biomass is used as fuel, in order to avoid the risk of bed agglomeration. The fluidizing agent, usually air (possibly also containing steam), is generally added in several stages. Primary air is fed to the bottom of the bed in order to achieve the minimum fluidization velocity of the solid material, visible in the formation of bubbles in the sand. Beds operating close to the minimum fluidization velocity are called Bubbling Fluidized Beds (BFB).

When the air velocity is increased above this value, particles are entrained, which makes it necessary to install a cyclone for the reintroduction of the solid particles into the reactor. This configuration is called Circulating Fluidized Bed (CFB) [9].

In air- (or oxygen-) fed fluidized bed reactors, the syngas methane content is relatively low, because the reactor operates as a high-temperature autothermal reformer.


Figure 3 - Bubbling Fluidized Bed Gasifier (a) and Circulating Fluidized Bed Gasifier (b) [10]

The Entrained Flow Gasifiers accept gaseous, pulverized or slurry feeds. The fuel is fed through burners in co-current with oxygen and possibly steam. When biomass is used as feed, it must be pulverized or submitted to a preliminary pyrolysis step. The gasification process takes place at temperatures of about 1200°C and pressures above 20 bar. These operating conditions lead to a non-leachable molten slag and a syngas with a very low TAR content, with consequent simplification of the downstream purification operations. The high operating pressure results in a compressed syngas that can be used directly in synthesis reactions. The high temperature makes heat recovery from the gases necessary, through coupling with steam and electricity production; in this way an important improvement in the process efficiency is achieved.


Figure 4 - Entrained Flow Gasifier [11]

In the Indirect Gasifiers, gasification occurs in the absence of oxygen, therefore without feed combustion. For this reason, the heat required by the endothermic reactions must be supplied from outside, with steam as the gasifying agent. In this configuration, the additional heat can be obtained by exploiting an external source or by burning part of the feed in a separate combustion chamber. The necessary heat can be supplied in different ways:

  • Direct transfer to the gasification environment;
  • Increase of the steam quantity or of its degree of superheating.

Both equilibrium thermodynamics and experimental data prove that, using steam as the gasifying agent rather than air or oxygen at temperatures in the range of 800-900°C, the methane content grows significantly.


4. Environmental and economic benefits of gasification

The great and obvious potential of the gasification process is mainly linked to the use of syngas for the production of chemicals such as methanol and fertilizers. Additionally, in some cases gasification can have the same purpose (e.g. heat and electricity generation) and the same feed typology as the incineration process, with benefits mainly related to environmental and economic aspects. The gasification of solid fuels normally used for power production (coal, MSW, etc.) allows a considerable reduction of pollutants such as SOx, NOx and Hg, as well as of CO2, a major cause of global warming. As regards CO2, some studies have compared the emissions of a gasification-based power plant with those of a combustion-based subcritical pulverized coal plant [12]. The results show that gasification slightly reduces the CO2-to-energy ratio (745 g/kWh against 770 g/kWh), but an important advantage lies in easier CO2 capture, since the CO2 is more concentrated in the exhaust gas. Gasification also allows easier sulphur and nitrogen removal. While combustion forms SOx and NOx, which are relatively difficult to remove, gasification produces different substances: 93-96% of the sulphur is transformed into H2S and the remainder into COS [13], while nitrogen forms N2 and NH3, the latter being removed during syngas cleaning. The H2S can be removed by absorption, producing elemental sulphur as a valuable by-product, saleable to fertilizer companies. Furthermore, inside gasifiers the formation of dioxins and furans is unfavoured, and a significant particulate matter reduction is possible with proper treatment. Unlike the ash produced by the incineration process, the slag from gasification can be used in road bed construction.
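The CO2 figures quoted above imply only a modest direct reduction, which a one-line calculation makes explicit:

```python
# Relative CO2 reduction implied by the quoted study figures:
# 745 g/kWh for the gasification-based plant versus 770 g/kWh
# for the combustion-based subcritical pulverized-coal plant [12].
gasification = 745.0   # g CO2 per kWh
combustion = 770.0     # g CO2 per kWh

reduction_pct = 100.0 * (combustion - gasification) / combustion
print(f"CO2 reduction: {reduction_pct:.1f}%")   # about 3.2%
```

A ~3% cut at the stack is small; as the text notes, the real advantage is the more concentrated CO2 stream, which lowers the cost of capture.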

Table 1 shows how the gasification process approaches natural gas emissions.


Table 1 - A comparison of emissions from electricity-generation technologies [14]

5. Gasification industry

The Gasification Technologies Council has carried out some important research to analyse the industrial development of gasification plants, summarised in graphs available at:

Some of them are shown below (Figures 5, 6 and 7). Looking at the global market, gasification capacity in Asia/Australia exceeds that of all the other continents put together, due to the strong growth of the chemical, fertilizer and coal-to-liquids industries in Asia (Figure 5). On the other hand, countries with large natural gas reserves invest less in this technology: for example, no gasification plants are currently present in Russia, while China is the most relevant investor in this field, with the highest number of gasification plants (Figure 6). In conclusion, Figure 7 clearly shows that coal represents the present as well as the future of gasifier feedstock.

Figure 5 - Gasification capacity by geographic region
Figure 6 - Map of Gasification Facilities
Figure 7 - Number of gasifiers by primary feedstock
[1] R.C. Brown 2003, Blackwell Publishing, Ames, IA
[2] R.C. Brown 2011, Thermochemical Processing of Biomass, Wiley
[3] R.C. Brown 2003, Blackwell Publishing, Ames, IA
[4] He et al. 2013, Applied Energy, Elsevier
[5] R.W. Breault 2010, Energies, 3, pp. 216-240
[6] J.G. Speight  2013, Coal-Fired Power Generation Handbook, Wiley
[7] C. Higman and M. Burgt 2008, Gasification, Elsevier
[9] P. Basu 2006, Combustion and Gasification in Fluidized Beds, CRC Press