Big Data, or Big Data analytics, refers to a set of technologies for handling large datasets characterized by six main attributes: volume, variety, velocity, veracity, value, and complexity.
With the recent advent of data-recording sensors in exploration, drilling, and production operations, the oil and gas industry has become a massively data-intensive industry.
Analyzing seismic and micro-seismic data, improving reservoir characterization and simulation, reducing drilling time and increasing drilling safety, optimizing the performance of production pumps, and improving petrochemical asset management, shipping and transportation, and occupational safety are among the applications of Big Data in the oil and gas industry.
In fact, oil and gas companies have ample opportunities to use Big Data to recover more oil and gas from hydrocarbon reservoirs, reduce capital and operational expenses, increase the speed and accuracy of investment decisions, and improve health and safety while mitigating environmental risks.
One of the key enablers of data-science-driven technologies for the industry is the ability to convert Big Data into “smart” data. New technologies such as deep learning, cognitive computing, and augmented and virtual reality provide a set of tools and techniques to integrate various types of data, quantify uncertainties, identify hidden patterns, and extract useful information, enormously reducing data processing time. This information is used to predict future trends, foresee behaviors, and answer questions that are often difficult or even impossible to answer through conventional models.
The Oil & Gas industry has moved into deeper, more remote, and technically demanding regions over the last 30 years. As the technical complexity of the extraction facility increases, so does the fixed cost of the upstream complex; yet in the persistent lower-for-longer price environment there is continuing pressure to develop these fields safely while reducing CAPEX and OPEX.
FPSO technology offers a promising, flexible solution for exploiting remote oil fields while maintaining competitive costs. Semisubmersible units, SPAR platforms, and tension-leg platforms (TLPs) are also common in deepwater regions. TLPs, in particular, find application in water depths up to 1,500 m, but FPSOs have the advantage of providing the required onboard storage capacity and offloading capability without a separate storage vessel or infrastructure.
The high dynamic motion generated by the rough sea conditions to which FPSO units are exposed when operating in remote sea areas makes riser-system design more challenging. In fact, the riser system plays a fundamental role in determining the feasibility of extracting hydrocarbons from remote-region resources. The development of a low-motion FPSO therefore enables the use of conventional riser systems, such as steel catenary risers and top-tensioned risers. Conventional riser technologies can also improve the life cycle and reliability of an FPSO facility: a simple and effective installation (by means of an additional facility structure) able to oppose the high dynamic forces that a rough sea environment exerts on the floating structure is a technological step change, needed to open up less accessible or economically cost-prohibitive fields.
The rapid growth of the world population, driven by the development of the industrial sector, has led to an increase in anthropogenic greenhouse gas emissions. A concentration of carbon dioxide in the atmosphere unprecedented in at least the last 800,000 years has been detected (Figure 1‑1). This, together with other anthropogenic drivers, has been identified as the main cause of the global warming observed since the mid-20th century.
To address the issue of rising CO2 concentration, the first worldwide agreement on greenhouse gas emissions was signed in April 2016. At the Conference of the Parties in December 2015 (COP21, Paris), 196 countries, responsible for over 55% of total CO2 emissions, committed to cap global warming well below 2°C (referred to the global land-ocean mean surface temperature, GMST), pursuing efforts toward the more challenging target of 1.5°C. Given this commitment, signatory countries need to review their energy strategies in order to reduce emissions by actively promoting low-carbon economy policies.
Natural gas is a fossil gas mixture consisting mainly of methane (C1). The remainder consists of heavier hydrocarbons: ethane (C2), propane (C3), isobutane (iC4), n-butane (nC4), and small amounts of heavier components up to C7+. The methane mole fraction in natural gas typically ranges from 87% to 97%.
Among all the fossil primary energy sources, natural gas presents the highest hydrogen-to-carbon ratio. This characteristic is extremely important, since it leads to the following two main properties:
Given the properties described above, natural gas plays a fundamental role in the fight against climate change. Substituting high-carbon-content fossil fuels, such as coal, with natural gas may represent a first step toward decreasing CO2 emissions.
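The claimed CO2 advantage of methane over coal can be sketched with combustion stoichiometry. The following is a back-of-the-envelope comparison, not a source from the text; the lower heating values and the pure-carbon coal proxy are textbook-style assumptions for illustration only.

```python
# Illustrative comparison of CO2 emitted per unit of heat released for
# methane vs. a simplified coal proxy (pure carbon).
# Heating values below are assumed approximate figures, not measured data.

M_CO2 = 44.01    # g/mol, molar mass of CO2
M_CH4 = 16.04    # g/mol, molar mass of CH4
M_C   = 12.01    # g/mol, molar mass of carbon

LHV_CH4  = 50.0  # MJ/kg, lower heating value of methane (approx.)
LHV_COAL = 30.0  # MJ/kg, typical bituminous coal (assumed)

# Combustion stoichiometry:
#   CH4 + 2 O2 -> CO2 + 2 H2O   (1 mol CO2 per mol CH4)
#   C   +   O2 -> CO2           (1 mol CO2 per mol C)
co2_per_MJ_ch4  = (M_CO2 / M_CH4) / LHV_CH4  * 1000  # g CO2 per MJ of heat
co2_per_MJ_coal = (M_CO2 / M_C)   / LHV_COAL * 1000

print(f"CH4:  {co2_per_MJ_ch4:.0f} g CO2/MJ")
print(f"Coal: {co2_per_MJ_coal:.0f} g CO2/MJ")
```

Under these assumptions methane emits roughly half as much CO2 per unit of heat as the coal proxy, which is the high hydrogen-to-carbon-ratio effect the text describes.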
The main sectors that would immediately benefit, in terms of CO2 emissions, from replacing low hydrogen-to-carbon fuels with methane are:
Clearly, the substitution of “conventional” fuels with methane is only a temporary solution, a way to buy time during a transition phase until zero-emission (renewable) energy sources are deployed worldwide.
Oil & Gas reservoir research and exploration require the use and adaptation of a large number of technologies spread over numerous engineering fields. Because of the intensive resources involved in such operations, the Exploration and Production (E&P) sector is power-demanding, and particular attention should be paid to making it smarter and more efficient.
In the search for technology updates, the upstream as well as the downstream Oil & Gas industry has always sought out external innovations, including in information technology and robotics.
Figure 1 shows a work-class ROV (remotely operated vehicle) for subsea exploration during its assembly phase. ROVs consist of robotic arms, known as manipulators, a camera for visual analysis of the subsea environment, electrical drives for motion control, and batteries or an umbilical cable for communication and power delivery. ROVs for exploration were introduced during the 1970s and represented a significant technology update in their field: because they can be designed to operate at very high pressures and low temperatures, unlike human operators, they allowed the discovery of a large number of new oil fields previously thought impossible to investigate, increasing the opportunities for Oil & Gas companies. The introduction of ROVs also decreased the cost of exploration operations and, beyond the economic aspect, increased safety by replacing human operators.
ROVs are also an example of technology transfer from external sectors (in this case the military) to upstream Oil & Gas operations. Technologies that enter the Oil & Gas sector often join a prolific chain of innovation and become refined and commercialized. That was also the case for ROVs: having been incorporated in the upstream sector for years, they found new applications in marine-biology research and have been used over the years to search for famous shipwrecks and discover new marine species.
In the following paragraphs, some of the most important new technologies in the E&P sector will be presented and discussed.
In recent years, artificial intelligence (AI), in its many integrated flavors from neural networks to genetic optimization to fuzzy logic, has made solid steps toward becoming more accepted in the mainstream of the oil and gas industry.
On the basis of recent developments in upstream Oil & Gas, it is becoming clear that the petroleum industry has realized the immense potential offered by intelligent systems. Moreover, with the advent of new sensors permanently placed in the wellbore, very large amounts of data carrying important and vital information are now available.
To make the most of these innovative hardware tools, software capable of processing the data in real time is required. Intelligent systems are the only viable techniques capable of bringing real-time analysis and decision-making power to the new hardware.
An integrated, intelligent software tool must have several important attributes, such as the ability to integrate hard (statistical) and soft (intelligent) computing and to combine several AI techniques. The most widely used techniques in the Oil and Gas sector are:
The techniques described above have been adopted in the Oil and Gas field since 1989. Figure 1 shows the number of AI applications in the O&G industry.
In the following sections, some applications of AI in the O&G sector will be analyzed, with a particular focus on drilling operations (Exploration & Production).
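To give a concrete flavor of the data-driven modeling mentioned above, the sketch below fits a hypothetical relationship between weight on bit (WOB) and rate of penetration (ROP) by plain gradient descent. The data are synthetic and the linear model is a stand-in for the neural networks, fuzzy logic, and other AI techniques actually used on field data.

```python
# Minimal sketch of a data-driven drilling model: fit ROP as a linear
# function of WOB by batch gradient descent on squared error.
# The training set is invented for illustration (WOB in klbf, ROP in ft/hr).
data = [(10, 32), (15, 47), (20, 61), (25, 78), (30, 92)]

w, b = 0.0, 0.0          # model: ROP ~ w * WOB + b
lr = 0.001               # learning rate
for _ in range(20000):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x   # gradient of squared error w.r.t. w
        gb += 2 * err       # gradient w.r.t. b
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

print(f"ROP ~ {w:.2f} * WOB + {b:.2f}")
```

In a real intelligent drilling system, a model of this kind (trained on downhole sensor streams rather than five invented points) would be used to recommend operating parameters in real time.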
The cause of climate change is attributed to the significant increase of greenhouse gases (mainly CO2) in the atmosphere, which trap heat radiating from Earth toward space. Analysis of ice cores has revealed that, for millennia, the concentration of carbon dioxide in the atmosphere remained below 300 ppm. As shown in Figure 1, this threshold was broken in 1950 and, since then, the concentration of CO2 has never stopped growing, reaching 410 ppm in 2019.
Based on the considerations above, the 21st century is recognized as the “era of climate change”, characterized mainly by the increase of the global land-ocean mean surface temperature (GMST) and, as a consequence, by other environmental phenomena such as rising average sea levels and the retreat of glaciers.
The reason the amount of GHGs in the atmosphere is increasing so rapidly is strictly connected to the growth of the world population driven by the development of the industrial sector. Since the mid-20th century, anthropogenic CO2 emissions have risen exponentially (see Figure 2), in line with the detected trend of carbon dioxide concentration in the atmosphere. Accordingly, human action is identified as the main cause of global warming.
The signing of the Paris Agreement (Paris climate conference - COP21, December 2015), the first-ever universal, legally binding global climate change agreement, represents an important act in the fight against climate change. Major players in the Oil & Gas and energy sectors are financing the development of sustainable technologies in order to diminish their significant carbon footprint. Actions to mitigate carbon dioxide emissions are mainly directed at the main sources of CO2 which, as shown in Figure 3, come from the combustion of coal, oil, and gas, and from flaring and cement production.
Every day, hundreds, if not thousands, of oil spills are likely to occur worldwide in many different types of environments: on land, at sea, and in inland freshwater systems. The spills come from various parts of the oil industry, mainly during:
The sea environment is particularly subject to oil pollution. It is estimated that approximately 706 million gallons of waste oil enter the ocean every year. According to the data on oil spills in the United States published by Environmental Research Consulting (ERC), large spills (over 30 tons), which account for 0.1% of incidents, represent 60% of the total amount of oil spilled. Nonetheless, 72% of spills are of smaller amounts (0.003 to 0.03 ton or less), as shown in Figure 1‑1.
Naturally, the relatively rare large spill incidents get the most public attention owing to their greater impact and visibility; however, the extent of the damage cannot be measured by spill size alone. Location and oil type are extremely important. Significant efforts have been made to study oil spills since the Exxon Valdez spillage of 1989 (Figure 1‑3). However, such knowledge has not kept pace with the growth of oil and gas development. In 2010, the Deepwater Horizon oil spill took place in the Gulf of Mexico (Figure 1‑3), considered one of the most catastrophic environmental disasters in human history. On that occasion, over 4.9 million barrels of crude oil were released, affecting 180,000 km2 of ocean.
Timely and highly efficient responses can lead to more hopeful outcomes with less overall damage to the environment. The most used cleanup devices and techniques are (Figure 1‑2):
Awareness of climate change impacts and the need for deep decarbonization has increased greatly in recent years. In 2018 the EU published its vision for the future of energy in Europe ‘A Clean Planet for All’ which aims at creating a “prosperous, modern, competitive and climate neutral economy by 2050.” A set of pathways has been developed and assessed that rely heavily on renewable energy and energy efficiency, with a role for natural gas and hydrogen.
The need to accelerate clean energy transitions is underscored by recent data: CO2 emissions rose for a second year in a row in 2018 to reach a record high.
In response to this growing awareness and the urgency of decarbonization, policy makers took action and in 2015 agreed to what is known as the Paris Agreement. This set the target of limiting the expected global average temperature increase to significantly less than 2°C, with the ambition of keeping it below 1.5°C. In order to achieve such necessary and ambitious targets, the European economy, and in particular the energy sector, needs to reduce CO2 emissions significantly, to a fraction of current levels (e.g. -80%, -95%), with a growing consensus that net-zero emissions will be required. Many changes will be required in how we work, travel, heat our homes, and obtain the energy necessary to carry out all these activities, as shown in Figure 2.
A key feature of hydrogen is its ability to act as both a source of clean energy (for a variety of uses), and an energy carrier for storage. Hydrogen can be transported through existing pipelines, mixed with natural gas, and through dedicated pipelines in the future. It offers an energy storage solution that costs ten times less than batteries.
Hydrogen is already widely used for industrial purposes across the steel, petrochemical and food sectors, but it is now also being used in mobility. In the future, it could also replace natural gas to heat residential and commercial buildings. Hydrogen can also be transformed into clean electricity by injecting it into fuel cells.
A key advantage of hydrogen is that its use does not generate carbon dioxide emissions or other climate-changing gases, nor emissions that are harmful to humans and the environment. For this reason, it will play a key role in ensuring that European and global decarbonisation objectives are achieved by 2050.
Low-carbon hydrogen from fossil fuels is produced at commercial scale today, with more plants planned. It is an opportunity to reduce emissions from refining and industry.
An early conception of “green chemistry” was developed in 1990 by P. Anastas and J. Warner through 12 principles, ranging from prevention and atom economy to pollution prevention and inherently safer chemistry. These principles, described below, offer a protocol to adhere to when developing novel chemical processes.
Today, more than 98% of all products and materials needed for modern economies are still derived from petroleum and/or natural gas, generating substantial quantities of waste and emissions.
An exaggerated, but illustrative, view of twentieth century chemical manufacturing can be written as a recipe:
The recipe for the twenty-first century will be very different:
A typical example of the twentieth-century chemical manufacturing model is plastic materials, which are also a typical example of the linear economy: non-renewable resources, oil or ethane in this case, are used to produce plastics, which at the end of life become waste and are dispersed into the environment. Today, about 8 million metric tons of plastic escape into the world's oceans each year, most of it from countries in South East Asia, where plastics use has outpaced waste-management infrastructure and the situation is approaching catastrophic proportions.
The green chemistry approach is the correct way to deal with the current environmental situation, and it represents a promising strategy of future economic development for industrialized countries as well.
Paul Anastas, then of the EPA, and John C. Warner developed the Principles of Green Chemistry (Figure 1), which help explain what the definition means in practice. The principles cover concepts such as:
Natural gas (NG) and liquefied natural gas (LNG), one trade form of NG, have attracted great attention because their use may alleviate rising concerns about the environmental pollution produced by other fossil fuels such as coal and oil. In the figure below, the typical components of NG are reported, giving an idea of their relative amounts:
There are two main distinctions between the final products obtained from gas processing. Pure natural gas liquids, meaning that at least 90% of the liquid contains one type of primary molecule, such as:
NG reserves may be located in hard-to-reach underground areas, and a significant portion of the reserves is often located offshore. The offshore extraction of NG and its conversion into liquefied NG has reached a turning point in terms of economic feasibility; in fact, just a few years ago, that type of extraction was thought to be:
As a result, there are many efforts to develop and monetize these stranded offshore reserves with floating facilities on which offshore liquefaction of NG is possible. The development of floating LNG (FLNG) technology is therefore becoming important.
Natural gas offshore facilities such as FLNG represent a very complex condensation of chemical-plant technologies, designed to be installed in limited space on dynamically moving vessels.
The limited space of floating vessels is indeed a challenging problem to overcome. For this reason, the amount of feed gas that can be handled by floating liquefaction is restricted. Gas pretreatment units may occupy about 50% of the available deck space of a floating production facility, although this depends on the impurity level of the feed gas stream. This indicates that FLNG is better suited to feed gas streams containing low levels of inert gases and impurities. CO2, hydrogen sulfide, nitrogen, mercury, and acid gases are the main impurities determining the amount of feed gas.
The demand for clean, renewable energy is continuing to increase around the world. Much of that demand is being met with wind and solar power, but these resources are intermittent and therefore require balancing. Presently, developed geothermal resources are not adequate to provide the balancing that will be needed in the future, so attention is turning to supercritical geothermal resources.
By utilizing supercritical fluids, geothermal energy could play an important role in a carbon-zero energy future. These supercritical fluids exist at temperatures above 374 °C and pressures above 22 MPa, providing much higher heat content and lower density, and so have the potential to generate around 10 times more energy than conventional geothermal wells for the same amount of extracted fluid.
Volcanic geothermal systems are associated with magmatic intrusions in the upper part of the Earth’s crust, characterized by increased temperature, specific fluid enthalpy, and convection of groundwater. Conventional exploitation of geothermal fluids from such systems typically produces an average of about 3-5 MW of electric power per well, with total worldwide exploitation of geothermal energy in 2018 corresponding to about 14.4 GW. Conductive heat transfer from a magmatic intrusion to the surrounding groundwater occurs in the roots of the geothermal system, below the depth of typical conventional geothermal wells. Recent modelling suggests that supercritical fluids with temperatures and enthalpies exceeding 400°C and 3000 kJ/kg, respectively, exist at the boundary between geothermal systems and the magmatic heat source, with such fluids possibly capable of generating up to 30-50 MW of electricity from a single well, up to ten times more than conventional geothermal wells.
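The per-well figures above can be roughly reproduced from fluid enthalpy and mass flow. The sketch below is a back-of-the-envelope estimate; the mass flows and conversion efficiencies are assumptions chosen for illustration, with only the enthalpy values taken from the text.

```python
# Rough per-well electric power estimate: thermal power carried by the
# produced fluid times an assumed power-cycle efficiency.
# Mass flows and efficiencies below are illustrative assumptions.

def electric_power_mw(mass_flow_kg_s, enthalpy_kj_kg, conversion_eff):
    """Electric power in MW from fluid enthalpy flow and cycle efficiency."""
    return mass_flow_kg_s * enthalpy_kj_kg * conversion_eff / 1000.0

# Conventional well: ~1100 kJ/kg two-phase fluid, ~40 kg/s, ~10% net efficiency
conventional = electric_power_mw(40, 1100, 0.10)
# Supercritical well: ~3000 kJ/kg (from the text), ~50 kg/s, ~25% efficiency
supercritical = electric_power_mw(50, 3000, 0.25)

print(f"Conventional:  {conventional:.1f} MW_e")
print(f"Supercritical: {supercritical:.1f} MW_e")
```

With these assumed numbers the conventional well lands in the 3-5 MW range and the supercritical well in the 30-50 MW range quoted above, showing how both higher enthalpy and higher cycle efficiency multiply the output.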
Since 1970, science has tried to find a solution to the energy crisis, developing new methods to use and store renewable energy.
The United States Department of Energy has projected that the world's energy consumption will increase by 20% and that the overuse of fossil fuels will have a severe impact on the climate.
The hardest current global challenge is to use renewable energy rather than fossil fuels while improving energy storage efficiency.
One of the most interesting technologies for energy storage and conversion is nanostructured materials, thanks to their mechanical and electrical properties.
Carbon nanotubes (CNTs) are a kind of nanostructured material with very good electrical and mechanical properties owing to their dimensions and surface properties. Carbon nanotubes were discovered in 1991 as a minor byproduct of fullerene synthesis. Research into CNTs has since increased, significantly reducing the cost of this technology and improving its processability and scalability. Nanotubes come in two types: single-wall and multi-wall.
In the following, an overview of thermal processes to store energy is reported, focusing in particular on the use of carbon nanotubes in the energy field, with a description of this technology and a presentation of the major results obtained with CNTs.
In order to reduce costs and improve worker productivity, some companies are driving the development of smart wearables and sensors for industrial environments.
Currently, safety at work is guaranteed through PPE (personal protective equipment) such as safety eyewear. Technology upgrades could make this standard equipment perform even better.
Examples of wearable technology that can greatly improve workplace safety are:
In the following, a review of intelligent clothing and its future developments is reported.
Energy systems are changing fast. The methods used to produce energy and the ways to transmit it are changing. The consumption of electrical energy is growing and its generation is becoming more decentralized, making grid management increasingly complex.
With the objective of overcoming the weaknesses of conventional electrical grids, the Smart Grid was introduced. A Smart Grid is an electricity network based on two-way digital communication. This system allows analysis, monitoring, communication, and control, with the aim of improving efficiency and reducing energy consumption and cost.
The Smart Grid has the potential to move the energy industry into a future of greater reliability, efficiency, and availability, allowing an improvement in environmental health. During this transition, it will be critical to carry out technology improvements, studies, consumer education, and standard regulations to ensure the benefits of the Smart Grid. The advantages of Smart Grids are:
In the following, a review of smart grids, with examples of installations and future developments, is reported.
There is significant interest in the production of renewable energy, and researchers continually try to find or improve methods to produce green energy. One of the best renewable sources is solar energy, available every day (though discontinuously).
A new system to capture and use solar energy is 3PV (printed paper photovoltaics). This technology uses an ink with electrical properties to print an advanced solar cell system on many materials, including paper.
3PV was developed and studied for the first time by MIT researchers in 2011.
This new technology could be incorporated into clothing, accessories, and so on, opening the way to new methods of using solar energy. The printed cells are flexible, so they could be used in documents, windows, wall coverings, etc., adapting their form. Furthermore, this cheap technology could bring new solar systems to rural areas that need a reliable source of electricity.
The efficiency of 3PV started at 1% in 2011 and has now reached about 20%.
Additionally, the power-to-weight ratio of this technology is among the highest ever achieved: in this respect it outperforms common photovoltaic cells on glass substrates.
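The power-to-weight claim can be illustrated with a simple specific-power estimate. All figures below are assumptions for illustration (efficiencies and areal masses are hypothetical, not data from the text); the point is only that a very light substrate can beat a heavy, more efficient module on a W/kg basis.

```python
# Specific power (W/kg) = efficiency * irradiance / areal mass.
# All input figures are illustrative assumptions.

IRRADIANCE = 1000.0  # W/m^2, standard test conditions

def specific_power_w_per_kg(efficiency, areal_mass_kg_m2):
    """Electrical power per unit mass of the cell/module."""
    return efficiency * IRRADIANCE / areal_mass_kg_m2

# Paper-substrate cell: low efficiency but extremely light (assumed 0.1 kg/m^2)
paper = specific_power_w_per_kg(0.05, 0.1)
# Glass module: higher efficiency but heavy (assumed 12 kg/m^2)
glass = specific_power_w_per_kg(0.20, 12.0)

print(f"Paper cell:  {paper:.0f} W/kg")
print(f"Glass panel: {glass:.0f} W/kg")
```

Even with a much lower efficiency, the paper cell's specific power comes out more than an order of magnitude higher under these assumptions, which is the sense in which 3PV "outperforms" glass modules.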
In the following, an overview of 3PV and the major results obtained with this technology so far is reported.
A smart fluid is a fluid whose properties can be controlled by an applied field. An electrorheological (ER) fluid, for example, is a liquid suspension of particles such as metals or zeolites that solidifies when an electric field is applied and becomes fluid again when the field is removed.
Smart fluids can be divided in four main classes:
Since the 1960s, engineers have tried to develop devices based on ER smart fluids (vibration dampers, flow-control valves, etc.), without important results. The turning point came in the 1990s, after the discovery of MR (magnetorheological) smart fluids: in 2002, suspension damping struts based on smart fluids were introduced on the Cadillac Seville STS.
The interest in this kind of technology is considerable, and the prospect of new devices based on smart fluids is real.
In the following, a review of smart fluids and their expected developments in the near future is reported.
The most used source of energy in the world is crude oil. Major portions of crude oil are used as transportation fuels such as diesel, gasoline, and jet fuel. However, crude oil contains sulfur, typically in the form of organic sulfur compounds. Sulfur content and API gravity are the properties with the greatest influence on the value of a crude oil. Sulfur content is expressed as a percentage of sulfur by weight and varies from less than 0.1% to more than 5%, depending on the type and source of the crude oil.
The removal of organosulfur compounds (ORS) from diesel fuel is key to reducing air pollution, as it lowers the emission of toxic gases (such as sulfur oxides) and other pollutants. Adsorptive desulfurization is one of the easiest and fastest methods to remove sulfur from diesel oils.
Adsorptive desulfurization of gasoline over nickel-based adsorbents provides high capacity and selectivity. The adsorption involves C-S bond cleavage, as evidenced by the formation of ethylbenzene from benzothiophene in the absence of hydrogen gas.
For example, hydrodesulfurized straight-run gas oil containing less than 50 ppm sulfur can be treated with activated carbon fiber to attain ultra-low-sulfur gas oil containing less than 10 ppm sulfur.
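The 50 ppm to 10 ppm polishing step lends itself to a simple mass balance. The sketch below is illustrative only: the adsorbent saturation capacity is a hypothetical figure, not a property of any specific activated carbon fiber.

```python
# Mass balance for the adsorptive polishing step: reduce gas oil from
# 50 ppmw to 10 ppmw sulfur. Adsorbent capacity is an assumed figure.

feed_sulfur_ppmw = 50.0
product_sulfur_ppmw = 10.0
fuel_mass_t = 1.0                 # treat 1 tonne of gas oil

# ppmw = grams of sulfur per tonne of fuel, so the removal duty is:
sulfur_removed_g = (feed_sulfur_ppmw - product_sulfur_ppmw) * fuel_mass_t

# Assumed saturation capacity of the adsorbent (hypothetical value)
capacity_mg_s_per_g = 5.0         # mg S per g of activated carbon fiber
adsorbent_needed_kg = sulfur_removed_g * 1000 / capacity_mg_s_per_g / 1000

print(f"Sulfur removed:   {sulfur_removed_g:.0f} g per tonne of fuel")
print(f"Adsorbent needed: {adsorbent_needed_kg:.1f} kg per tonne of fuel")
```

The calculation shows why adsorption suits deep polishing rather than bulk desulfurization: the sulfur duty at ppm levels is tiny, so even a modest-capacity adsorbent bed can handle it.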
The next paragraphs describe the desulfurization of gasoline with some of the methods in use.
According to the 2017 edition of the BP Energy Outlook, the world economy will double over the next 20 years, with an annual growth of 3.4% driven by China and India. Oil, gas, and coal will account for more than 75% of energy supplies in 2035, even though the use of renewable resources will increase. In this context, gas will overtake coal, becoming the second fuel source in 2035 with an annual growth of 1.6%. Focusing on oil demand, it reached 94.4 Mbbl/day in 2015 and is expected to exceed 100 Mbbl/day in 2021. Oil companies have therefore started to explore new unconventional reservoirs, such as tight and heavy oil, shale gas, etc., with the aim of increasing production. However, these new oilfields are in desert, arctic, and deepwater zones and require specific technologies to be extracted. In the last fifty years several accidents have occurred, such as the Exxon Valdez oil spill in 1989 and the Deepwater Horizon oil spill in 2010. In this scenario, robotic technologies can play a key role in increasing safety, efficiency, and productivity and minimizing risks. Their applications in the oil and gas sector are therefore described in the following sections.
The IEA estimated, in the “Medium-Term Oil Market Report 2016”, that oil demand will increase from 94.4 Mbbl/day in 2015 to 101.6 Mbbl/day in 2021, with a mean annual growth of 1.2% driven by Asia and the Middle East. However, in the last ten years production costs have increased by about 60%, while oil prices have fallen; for example, OPEC oil prices decreased from 109.45 US$/bbl in 2012 to 40.68 US$/bbl in 2016. In this scenario, digital technologies can play a pivotal role in reducing costs and risks and increasing production and operational efficiency. McKinsey & Company, indeed, argued that digital technologies could reduce capital expenditures by about 20% and operating costs by 3-5% in upstream and by about 50% in downstream. Moreover, digitalization could create, in the next ten years, about 1 trillion dollars of value for the sector, of which 580-600 billion for upstream, 100 billion for midstream, and 260-275 billion for downstream. Furthermore, it could improve productivity by about 10 billion dollars, reduce water usage and emissions by 30 and 430 billion dollars respectively, and save 170 billion dollars for customers. The main digital technologies and the digital oilfield are therefore described in the following sections.
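The demand figures quoted above can be cross-checked with a compound-growth calculation; the quoted 1.2% mean annual rate roughly reproduces the 2021 figure.

```python
# Consistency check of the quoted IEA figures: does 94.4 Mbbl/day in 2015,
# growing at a mean 1.2% per year, reach ~101.6 Mbbl/day by 2021?

demand_2015 = 94.4   # Mbbl/day
growth = 0.012       # mean annual growth rate
years = 2021 - 2015

demand_2021 = demand_2015 * (1 + growth) ** years
print(f"Projected 2021 demand: {demand_2021:.1f} Mbbl/day")
# -> Projected 2021 demand: 101.4 Mbbl/day
```

The compounded projection (about 101.4 Mbbl/day) agrees with the quoted 101.6 Mbbl/day to within rounding of the growth rate.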
In the refinery sector, the fuel and feedstock markets, as well as more stringent environmental regulations, are exacerbating the need to maximize the conversion of residue to distillates. In particular, while distillate fuel demand (gasoline, diesel) is still increasing, the demand for residual fuel oils is about to fall sharply.
Compared with traditional operations, present refineries face several challenges because of crude oils characterized by high contents of aromatics, acids, metals, and nitrogen, which put more pressure on the hydrocracking and hydrotreating processes that must handle low-quality feedstock without significant loss of yield or efficiency.
The hydrocracking (HC) process removes undesirable aromatic compounds from petroleum stocks, producing cleaner fuels and more effective lubricants. Its main application is to upgrade vacuum gas oil, alone or blended with other feedstocks (light-cycle oil, deasphalted oil, visbreaker or coker gas oil), producing intermediate distillates (naphtha, jet and diesel fuels), low-sulfur oil, and extra-quality FCC feed. HC works by adding hydrogen and promoting the cracking of heavy fractions into lighter products. With reference to Figure 1, HC globally involves catalytic cracking (the endothermic splitting of a C-C bond) and the addition of hydrogen to the C=C bond (exothermic).
Catalysts are substances used to speed up chemical reactions or to selectively drive the desired reaction for maximum efficiency. They can be homogeneous or heterogeneous; that is, they may or may not be in the same state of aggregation as one or more of the reagents. Focusing on heterogeneous solid-state catalysts, which are by far the most widely applied, they are generally shaped bodies of various forms, such as rings (Raschig rings being the most common, see Figure 1), spheres, tablets, and pellets, and their performance is measured according to indices such as:
Gas-to-liquids (GTL) is a technology that enables the production of clean-burning diesel fuel, liquid petroleum gas, base oil and naphtha from natural gas. The GTL process transforms natural gas into very clean diesel fuel, because the products are colorless and odorless hydrocarbons with very low levels of impurities.
Much of the world’s natural gas is classified as “stranded,” meaning it is located in a remote area, far from existing pipeline infrastructure. The volumes often are too small to make constructing a large-scale gas treatment plant cost-effective. As a result, the gas is typically re-injected into the reservoir, left in the ground, or flared, which is harmful to the environment. However, the availability of this low-cost, stranded gas has incentivized companies to develop innovative technologies that can economically and efficiently utilize this gas by converting it into transportation fuels such as diesel and jet fuel.
Refineries can also use GTL to convert some of their gaseous hydrocarbon waste products into valuable fuel oil which can be used to generate income.
Small-scale GTL plants are containerized units comprised of a reformer for synthesis gas production, a Fischer Tropsch (FT) reactor for syncrude production, and, in some cases, an upgrading package, which is used to further refine the FT products into the desired transportable fuel. Since these containerized units already have about 70 percent of their construction complete before reaching the plant site, on-site construction costs are significantly reduced. In cases where capacity needs to be increased, additional units can be easily shipped via truck or ship and connected in parallel to the existing process. Depending on the technology, capacity can range anywhere from 100 barrels per day (bpd) to 15,000 bpd.
Fischer-Tropsch is the process of chemically converting natural gas (gas to liquids, GTL), coal (coal to liquids, CTL), biomass (biomass to liquids, BTL) or bitumen from oil sands (OTL) into liquids.
All four processes consist of three technologically separate sections.
The carbon and hydrogen are initially separated from the methane molecule and reconfigured by steam reforming and/or partial oxidation. The syngas produced consists primarily of carbon monoxide and hydrogen.
The syngas is processed in Fischer-Tropsch (F-T) reactors of various designs, depending on the technology, creating a wide range of paraffinic hydrocarbon products (synthetic crude, or syncrude), particularly those with long-chain molecules (e.g. those with as many as 100 carbon atoms).
The syncrude is refined using conventional refinery cracking processes to produce diesel, naphtha and lube oils for commercial markets. By starting with very long chain molecules the cracking processes can be adjusted to an extent in order to produce more of the products in demand by the market at any given time. In most applications it is the middle distillate diesel fuels and jet fuels that represent the highest-value bulk products with lubricants offering high-margin products for more limited volume markets. In modern plants, F-T GTL unit designs and operations tend to be modulated to achieve desired product distribution and a range of product slates.
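The chain-growth statistics behind this product-slate tuning are commonly described by the Anderson-Schulz-Flory (ASF) distribution, which the text does not spell out. The sketch below is an idealized illustration under the standard single-parameter ASF assumption (chain-growth probability α); real reactors deviate from it, and the carbon-number cut points chosen here are rough, hypothetical ranges.

```python
# Anderson-Schulz-Flory (ASF) distribution: a common idealized model of
# Fischer-Tropsch chain growth (illustrative sketch; real reactors deviate).
# W_n = n * (1 - alpha)**2 * alpha**(n - 1) is the mass fraction of the C_n
# product, where alpha is the chain-growth probability.

def asf_mass_fraction(n: int, alpha: float) -> float:
    """Mass fraction of the hydrocarbon with n carbon atoms."""
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

def cut_fraction(alpha: float, n_min: int, n_max: int) -> float:
    """Total mass fraction in the carbon-number range [n_min, n_max]."""
    return sum(asf_mass_fraction(n, alpha) for n in range(n_min, n_max + 1))

if __name__ == "__main__":
    for alpha in (0.80, 0.90):
        naphtha = cut_fraction(alpha, 5, 10)   # rough naphtha cut (assumed)
        diesel = cut_fraction(alpha, 11, 22)   # rough middle-distillate cut (assumed)
        print(f"alpha={alpha}: naphtha~{naphtha:.2f}, diesel~{diesel:.2f}")
```

Raising α shifts the slate toward heavier, higher-value middle distillates, which is the lever the cracking and F-T sections modulate.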
Research and development in GTL processes and plants involves several parts of the plant:
Synthetic fuel production technology, known as GTL, was invented in the 1920s. One of the best-known ways to create synthetic fuel is through Fischer-Tropsch (FT) synthesis. FT technology was initially developed in Germany to address petroleum shortages leading up to World War II. By 1944, Germany was producing 124 Mbpd of synthetic fuels from coal at 25 FT plants.
Next-generation technology was developed in South Africa, which sought to support its economy without oil. In the 1970s, the technology evolved in Western Europe and the US toward large plants and large-scale production.
In recent decades, advances in GTL technologies have made small-scale GTL, and even micro-scale GTL, operationally and potentially economically feasible.
Several factors are converging to drive the growth in the GTL industry:
As petroleum prices remain high and new discoveries make natural gas abundant and cheap by comparison, more energy companies are exploring ways to reduce the CAPEX of synthetic fuel production. To this end, companies are looking into building smaller-scale, modular plants that can operate in remote locations.
Several Gas-to-Liquids (GTL) technologies have emerged over the past three decades as a credible alternative for gas monetisation for gas-producing countries to expand and diversify into the transportation fuel markets. The final GTL product may be syncrude, which can be injected into an oil pipeline, thereby avoiding the need to transport another product to market, or higher-value liquid fuels or chemical feedstocks such as gasoline, diesel (without sulphur and with a high cetane number), naphtha, jet fuel, methanol or di-methyl ether (DME).
At present, five commercial-scale GTL plants are in operation (Fig. 1). These five plants include:
These five plants represent nearly 259 Mbpd of capacity. At 140 Mbpd, Shell’s Pearl GTL complex represents more than 50% of the world’s total commercial-scale GTL capacity.
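The capacity figures quoted for the individual plants below (36, 14.7, 140, 34 and 33 Mbpd, all from this text) sum to roughly the total stated here, and a quick arithmetic check confirms Pearl's more-than-50% share:

```python
# Plant capacities in Mbpd, as quoted in this section (figures from the text).
capacities = {
    "PetroSA Mossel Bay": 36.0,
    "Shell Bintulu": 14.7,      # after debottlenecking
    "Shell Pearl": 140.0,
    "Oryx GTL": 34.0,
    "Escravos GTL": 33.0,
}

total = sum(capacities.values())
pearl_share = capacities["Shell Pearl"] / total
print(f"total ~ {total:.1f} Mbpd, Pearl share ~ {pearl_share:.0%}")
```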
The first GTL plant was developed by PetroSA in 1992. This 36-Mbpd plant is in Mossel Bay, South Africa. The plant utilizes FT technology to process methane-rich natural gas into high-quality, low-sulfur synthetic fuels. Products include unleaded petrol, kerosene, diesel, propane, distillates, process oil and alcohols.
Shell commissioned its first commercial GTL plant in Bintulu, Malaysia in 1993. The plant’s initial construction cost was $850 MM. The 12.5-Mbpd plant underwent a $50-MM debottlenecking that increased total capacity to 14.7 Mbpd. Since 1993, it has produced the following products: liquefied petroleum gas (up to 5%), naphtha (up to 30%), diesel fraction (up to 60%) and paraffin (up to 5-10%).
The Pearl GTL complex is the largest GTL facility in the world. The 140-Mbpd facility is located in Ras Laffan Industrial City, Qatar. The $19-B natural gas processing and GTL integrated complex was developed by a JV of Shell and Qatar Petroleum.
Oryx GTL was the Middle East’s first GTL plant. Developed by Qatar Petroleum and Sasol, the $6-B plant also processes natural gas from Qatar’s North Field. Construction of the facility began in late 2003, and it started production in early 2007. The facility processes 330 Mcfd of methane-rich gas from Qatar’s North field and produces 34 Mbpd of liquids, with the majority being low-sulfur, high-octane GTL diesel.
The latest commercial-scale GTL plant to commence operations is the Escravos GTL plant. The $10-B facility was developed by a JV consisting of Chevron, Sasol and Nigerian National Petroleum Corp. The plant utilizes technology from both JV partners to convert up to 325 MMcfd of natural gas into 33 Mbpd of GTL diesel and GTL naphtha. The plant has been operational since 2014.

NEW GTL FACILITIES UNDER DEVELOPMENT
ENVIA Energy’s GTL plant at the Waste Management landfill in Oklahoma came online in 2017. The plant, partially fed with landfill gas, announced its first finished, saleable products on June 30, 2017, but as of January 2018 had not yet reached its 250-bpd design capacity.
Four more plants (Greyrock 1, Juniper GTL, Primus 1 and Primus 2) are expected to start up in 2018. The new owner of Juniper GTL, York Capital, will likely target future plant sizes of more than 5,000 bpd (consuming 50 MMscfd of gas). Greyrock and Primus GE have announced that they will continue strong business development efforts in the gas-flare arena.
Haldor Topsoe has joined forces with Modular Plant Solutions (MPS) to design and engineer a small-scale methanol plant (215 tpd) called “Methanol-To-Go™”. The size of the plant is similar to the Primus 1 and 2 plants, with a gas feed rate of 7 MMscfd.
BgtL is a new player in the micro-GTL arena (20-200 bpd). However, their patented technologies are based on 2 decades of R&D work in research institutes. Their portfolio of products includes plant modules that convert gas volumes as small as 2 Mscfd into a range of products including oil, diesel, methanol and others.
Summarizing, the current leading GTL technology providers with commercial offers are:
Micro-GTL: unattended operation units below ~1 MMscfd and costing below ~US$10 MM
Mini-GTL: small modular plants with some operators and a cost above US$10 MM
More information on these companies and their projects can be found in the most recent bulletin on GTL technology. The following figure reports the EIA forecast for GTL production in the next few years:
The GTL market is pushing toward small-scale and modular units. These types of plants can be built at greatly reduced capital cost, which can run into the billions of dollars for large-scale facilities.
Gas units, technologies used, size and other functional data for several companies involved in GTL technology are summarized in the tables below:

Calvert Energy Group/OXEON
The Calvert Energy Group offers modular GTL (flare and stranded gas to diesel) plants ranging in size from 0.2 MMscf/d to 100 MMscf/d. The OXEON technology used is exclusively licensed to Calvert Energy Group by OXEON.
CompactGTL’s modular unit offers a small-scale gas-to-liquid (GTL) solution for small- and medium-sized oil field assets where no viable gas monetization option exists so that the associated gas is either flared or reinjected.
Gas Technologies LLC manufactures, installs and operates modular gas-to-liquids plants that utilize the patented GasTechno® single-step GTL conversion process. GasTechno® Mini-GTL® plants convert associated flare gas and stranded natural gas into high-value fuels and chemicals including methanol, ethanol and gasoline/diesel oxygenated fuel blends while serving to reduce greenhouse gas emissions. The unit capital cost of the plants is approximately 70% lower than traditional methanol production facilities and they require relatively limited operation & maintenance costs.
Greyrock Energy was founded in 2006 and is headquartered in Sacramento, California, with offices and a demonstration plant in Toledo, Ohio. Its sole focus is small-scale GTL Fischer-Tropsch plants for Distributed Fuel Production®, and it has a commercial offer of both a fully integrated 2000 bpd plant consuming about 20 MMscfd and smaller “MicroGTL” plants (5 – 50 bpd).
Velocys is a smaller-scale GTL company that provides a bridge connecting stranded and low-value feedstocks, such as associated gas and landfill gas, with markets for premium products, such as renewable diesel, jet fuel and waxes. The company was formed in 2001, a spin-out of Battelle, an independent science and technology organization. In 2008, it merged with Oxford Catalysts, a product of the University of Oxford. Velocys aims to deliver economically compelling conversion solutions. It is traded on the London Stock Exchange, with offices in Houston, Texas; Columbus, Ohio; and Oxford, UK.
Primus Green Energy is based in Hillsborough, New Jersey, USA. The company is backed by Kenon Holdings, a NYSE-listed company with offices in the United Kingdom and Singapore that operates dynamic, primarily growth-oriented, businesses. Primus Green Energy™ has developed Gas-to-Liquids technology that produces high-value liquids such as gasoline, diluents and methanol directly from natural gas or other carbon-rich feed gas.
By taking advantage of new technologies, such as microchannel reactors, to shrink the FT and SMR hardware, GTL plants can be scaled down to provide a cost-effective way to take advantage of smaller gas resources. GTL plants based on the use of microchannel FT reactors can be operated on a distributed basis, with smaller plants located near gas resources and potential markets.
Smaller, modular GTL plants are suitable for use in remote locations. In contrast to conventional GTL plants, they are designed for the economical processing of smaller amounts of gas ranging from 100 million cubic meters (MMcm) to 1,500 MMcm, and they can produce 1,000 bpd–15,000 bpd of liquid fuels. The plants can be scaled to match the size of the resource, expanded as necessary, and potentially integrated with existing facilities on refinery sites.
Smaller-scale GTL operations also pose a lower risk to producers. Since the plants are smaller, construction costs are reduced; and, since the plants are modular, investment can be phased. The construction time is short, at 18–24 months. In addition, because the modules and reactors are designed only once and then manufactured many times, much of the plant can be standardized and shop-fabricated in skid-mounted modules. This reduces the cost and risk associated with building plants in remote locations. In addition, the components can be designed to use standard, off-the-shelf equipment, so there is less strain on supply chains, and the need for onsite construction work is reduced.
Since the FT process also lies at the heart of biomass-to-liquids (BTL) processes, the same technology can be used to produce high-quality, ultra-clean diesel and jet fuel from waste biomass, including municipal waste. Smaller-scale GTL plants offer advantages at all stages of production: upstream, midstream and downstream.
The small-scale processing of natural gas principally needs new technologies for converting hydrocarbons into liquid chemicals and fuels. There are several possibilities.
The first one is to develop more effective, less complex methods for converting hydrocarbon gases into syngas.
The second is to work out fundamentally different methods for the conversion of natural gas into chemicals without the intermediate stage of syngas production, either by working on the composition of existing catalysts or by developing new ones.
With smaller-scale GTL plants, the greatest challenge is to find ways to combine and scale down the size and cost of the reaction hardware while still maintaining sufficient capacity. This, in turn, depends on finding ways to reduce reactor size by enhancing heat-transfer and mass-transfer properties to increase productivity and intensify the syngas-generation and FT processes. The use of microchannel reactors offers a way to achieve these goals.
The technology can be applied to both highly exothermic processes such as FT, and highly endothermic processes such as SMR. Microchannel FT reactors contain thousands of thin process channels filled with FT catalyst, interleaved with water-filled coolant channels. Since the small-diameter channels dissipate heat more quickly than do conventional reactors, more active FT catalysts can be used to significantly accelerate FT reactions, thereby boosting productivity.
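Why small channels dissipate heat faster can be seen with a back-of-envelope geometric argument: for a circular channel of diameter d, the wall area available for heat transfer per unit of reacting volume scales as 4/d. The channel sizes below are assumed for illustration, not taken from any specific reactor design.

```python
# First-order geometric sketch (not a design calculation): for a circular
# channel of diameter d, wall area per unit volume is A/V = 4/d, so
# shrinking the channel raises the specific heat-removal area in proportion.

def area_per_volume(d_m: float) -> float:
    """Wall area per unit volume for a circular channel, in 1/m."""
    return 4.0 / d_m

micro = area_per_volume(1e-3)    # 1 mm microchannel (assumed size)
tube = area_per_volume(25e-3)    # 25 mm conventional tube (assumed size)
print(f"microchannel: {micro:.0f} m^-1, tube: {tube:.0f} m^-1, "
      f"ratio: {micro / tube:.0f}x")
```

The ~25x gain in specific heat-transfer area is what allows more active FT catalysts to run without thermal runaway.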
In microchannel SMR reactors, the heat-generating combustion and SMR processes take place in adjacent channels. The high heat-transfer properties of the microchannels make the process very efficient (Fig. 4).
The technology was made possible by creating a novel catalyst using cobalt as the active metal in a multicomponent composite. The elimination of certain processing stages and the production of a high-quality, single-liquid product make INFRA’s GTL solutions economically feasible, from small-scale, pre-engineered, standardized, modular (as small as containers), easily deployed and transportable units all the way to large-scale, integrated gas processing plants.
By offering the ability to target supply into global liquid-transportation-fuel markets, GTL plants significantly diversify market opportunities and help to smooth financial returns in volatile conditions where gas market prices and oil and petroleum product market prices become decoupled.
There are several factors that determine the cash flow and income streams associated with GTL plants. The key factors required for a methodology that analyses the commercial attractiveness of a GTL plant in a multi-year cash flow model include:
These product prices are in most cases strongly influenced by benchmark crude oil prices. GTL products generally trade in price ranges that reflect prevailing refinery and petrochemical plant crack spreads. Sometimes GTL products trade at small premiums to refinery-derived products because of their superior quality (i.e. low sulphur and low aromatics in the case of diesel and gasoline). Aspects to be considered are:
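A multi-year cash-flow analysis of the kind described above can be sketched in a few lines. Everything numeric below (capex, capacity, margin, opex, discount rate) is an illustrative assumption, not a figure from this article; whether the result is positive depends entirely on the assumed margin and capital cost.

```python
# Minimal multi-year cash-flow sketch for a hypothetical GTL plant.
# All figures are illustrative assumptions, not data from this article.

def npv(cash_flows, discount_rate):
    """Net present value of year-indexed cash flows (year 0 = first item)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

capex = 200e6           # up-front capital cost, US$ (assumed)
bpd = 2_000             # liquid-product capacity, barrels per day (assumed)
uptime = 0.90           # on-stream factor (assumed)
crack_margin = 25.0     # product-minus-feed margin, US$/bbl (assumed)
opex_per_bbl = 8.0      # operating cost, US$/bbl (assumed)
years = 20

annual_bbl = bpd * 365 * uptime
annual_cf = annual_bbl * (crack_margin - opex_per_bbl)
cash_flows = [-capex] + [annual_cf] * years
print(f"annual cash flow ~ US${annual_cf / 1e6:.1f} MM, "
      f"NPV@10% ~ US${npv(cash_flows, 0.10) / 1e6:.1f} MM")
```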
FT technology typically has four components: synthesis gas (syngas) generation, gas purification, FT synthesis and product upgrading. The third stage constitutes a distinctive technology that provided the basis for future technological developments and innovations. The remaining three technologies were well-known before FT invention, and have been developed separately.
The syngas is normally produced via high-temperature gasification in the presence of oxygen and steam.
For the components of the plant, some aspects can be considered for cost analysis:
GTL technologies can transform off-gas streams that would otherwise be flared into valuable liquid transportation fuels and chemicals, including high-quality gasoline or methanol, or into a separate stream of hydrogen-rich vent gas that can be used as an additional onsite hydrogen or fuel source, making this an ideal solution for reducing gas flaring while boosting returns.
In addition, greenhouse gas emissions can be further reduced with GTL systems through the input of CO2 streams as a co-feed, which is converted into gasoline or methanol, representing a valuable use for what is typically considered a low-value or even negative-value gas stream.
Properties of GTL fuel include enhanced aquatic and soil biodegradability and lower aquatic and soil ecotoxicity. Fuels produced from the FT process offer significantly better performance than their petroleum-based equivalents. FT-derived diesel does not contain aromatics or sulfur, and it burns cleaner than petroleum-derived fuels, resulting in lower emissions of nitrogen oxides (NOx), sulfur oxides (SOx) and particulates. Exhaust emissions experiments on GTL products revealed overall significant reductions of CO (22%-25%), hydrocarbons (30%-40%) and NOx (6%-8%). GTL diesel has the potential to be sold as a premium blendstock.
The combination of these features indicates that GTL fuel is less likely to cause adverse environmental impacts than clean conventional fuels. In addition, FT diesel can be blended with lower-cetane, lower-quality diesels to achieve commercial diesel environmental specifications.
When the feedstock includes a renewable component, whether renewable biogas (as in the case of the ENVIA Energy project), or forestry and sawmill waste (as in the case of Red Rock Biofuels’ proposed project in Oregon), the fuels produced deliver a significant reduction in lifecycle greenhouse gas (GHG) emissions over conventionally produced fuels.
The presence of sulfur compounds in fuel oils causes concern both during refining (due to catalyst deactivation and corrosion) and during the fuel’s end use, since combustion generates sulfur oxide emissions. The main environmental concern from SOx emissions is respiratory problems. Sulphur oxides also react with water to produce sulphuric acid, the main cause of acid rain and corrosion. Furthermore, when the emissions are in the form of sulphate particles, sulfur also contributes to the formation of particulate matter.
The original sulfur content of crude oils (organic, in the form of thiols, sulphides and thiophenic compounds, and inorganic, such as S, H2S and FeS2) varies from 0.01 to 8 wt% (see Figure 1). Globally, the sulfur content of the distillation fractions increases with boiling range, and the class of aromatics is the most resistant to desulfurization.
Of the roughly 100 million bpd of oil supply, about 4% is represented by oil-based marine fuel. Shipping is by far the main pathway of international commerce, and its emissions have worldwide dispersion (also affecting climate). For decades, ISO standards accepted a limit of 3.5% sulphur for heavy bunker fuel. To lower pollution near ports, many governing bodies have established Emission Control Areas (ECAs) in which the maximum sulphur content of burned fuels is limited. The allowable level in these regions has been reduced from 1.5% (2010) to the present 0.1%. In addition, the International Maritime Organization (IMO) has planned to lower the sulphur content to 0.5 wt% from 2020. Many Chinese ports, including Shenzhen and Shanghai, are going to implement the IMO 0.5% sulphur limit. These regulations require very deep desulfurization to meet ultra-low-sulfur diesel (ULSD) specifications (15 ppm). According to McKinsey & Co., the shipping industry will react by switching to a combination of marine gasoil and low-sulfur residuals, generating very attractive investment in sulfur-removal technologies.
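The limits quoted above are easier to compare in a single unit (1 wt% = 10,000 ppm by mass). The sketch below collects the figures from the text into a small compliance helper; the regime names are labels chosen here, not official designations.

```python
# Sulfur limits quoted in the text, converted to one unit (ppm by mass).
# Regime names are informal labels, not official regulatory designations.
LIMITS_PPM = {
    "pre-2020 bunker": 3.5 * 10_000,   # 35,000 ppm
    "IMO 2020 global": 0.5 * 10_000,   #  5,000 ppm
    "ECA": 0.1 * 10_000,               #  1,000 ppm
    "ULSD": 15.0,                      #     15 ppm
}

def compliant(sulfur_ppm: float, regime: str) -> bool:
    """Does a fuel with the given sulfur content meet the named limit?"""
    return sulfur_ppm <= LIMITS_PPM[regime]

print(compliant(1_000, "IMO 2020 global"))  # a 0.1 wt% fuel meets the global cap
print(compliant(1_000, "ULSD"))             # but is far above the 15 ppm ULSD spec
```

The three-orders-of-magnitude gap between bunker-fuel limits and the ULSD specification is what makes "very deep" desulfurization technologies necessary.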
Foster Wheeler examined the impact of the new regulations on a typical refinery, concluding that the new targets will be achieved by processing the crudes with the lowest sulfur content or by increasing blending with distillates. From the market point of view, particularly considering the SECA regulations, distillate production will be under pressure, and the new capital costs (upgrading/retrofit) will increase the price of bunker fuels up to the diesel level. On the other hand, new desulfurization projects (50-100 in the next 12 years) will be needed to produce ~200 million tonnes/year of residue meeting the future specifications. In summary, the options available to meet the future environmental standards are:
It is the most common technique, already implemented in every refinery system, and needs hydrogen as a reactant and a catalyst (typically Co-Mo/Al2O3 and Ni-Mo/Al2O3) to convert sulfur compounds into H2S. Typical operating conditions are high temperatures (>300 °C) and pressures (>100 bar). Heterocyclic compounds are hardly removed (due to sterically hindered adsorption on the catalyst surface), while thiols and sulfides are completely converted into H2S. The latter is subsequently separated from the fuel oils and oxidized into elemental sulfur (Claus process). HDS can be applied to different streams of the overall refining process: i) pre-upgrading (e.g. VGO hydrotreating); ii) residue upgrading; as well as iii) whole-crude hydrotreatment, directly generating low-sulphur crudes. These solutions are discussed in the report by Foster Wheeler, which also points out the increase in carbon emissions related to the new refinery configurations able to meet these standards.
The overall effectiveness of HDS is limited by: i) the metal content of heavy oils; ii) coking and fouling potential; iii) steric hindrance, during both the catalytic reaction and adsorption. In conclusion, pushing HDS to meet ULSD standards means high pressures and temperatures (requiring high capital and operating costs), limited catalyst life and a high energy and carbon footprint.
This process, consisting of confining the sulfur compounds onto a solid matrix, depends on the selectivity of the sorbent as well as on the regeneration method. Several sorbent materials have been evaluated for both model oils and distillates: activated carbon, silica-aluminas, zeolites, gallium-containing Y-zeolites, Cu-zirconia and metal-organic frameworks. Experimentally, acceptable desulfurization levels can be achieved under mild conditions. On the other hand, the process reliability is still not sufficient for industrial applications. Moreover, heavy oils contain large molecules that strongly reduce the adsorption efficiency due to steric hindrance.
This process does not require hydrogen or external energy, since it employs microorganisms to remove sulfur atoms from organic compounds. It is still not practicable on an industrial scale. Some experimental evidence has been presented in the literature for model matrices.
Extractive desulfurization does not require hydrogen and can be operated at mild conditions. On the other hand, the system thermodynamics influences the process efficiency, since i) the solubility of the compounds in the solvent (acetone, ethanol, polyethylene glycols, etc.) limits the extraction yield; ii) the solvent and the oil should be immiscible to minimize solvent losses; iii) the viscosity of the fluids worsens the mixing; iv) the vapor pressure of the solvent limits the operating conditions; v) the solvent may contain other compounds extracted from the oil. Because of these drawbacks, the energy footprint of solvent regeneration can be very high.
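How solubility limits the extraction yield can be illustrated with an idealized equilibrium-stage calculation: with a partition coefficient K (solvent-phase over oil-phase concentration) and a solvent-to-oil volume ratio S per stage, each ideal cross-current stage leaves a fraction 1/(1 + K·S) of the sulfur species in the oil. The numbers below are hypothetical.

```python
# Idealized cross-current liquid-liquid extraction sketch (illustrative only):
# fraction of sulfur species left in the oil after n equilibrium stages,
# given partition coefficient K and per-stage solvent-to-oil ratio S.

def fraction_remaining(K: float, S: float, n_stages: int) -> float:
    return (1.0 / (1.0 + K * S)) ** n_stages

# Hypothetical values: K = 2 for a polar solvent, equal solvent volume per stage.
for n in (1, 2, 3):
    left = fraction_remaining(K=2.0, S=1.0, n_stages=n)
    print(f"{n} stage(s): {100 * (1 - left):.0f}% removed")
```

Even in this idealized picture, a modest K forces either multiple stages or large solvent volumes, which is exactly what drives up the solvent-regeneration energy cost noted above.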
ODS is a viable alternative to HDS, since oxidized sulfur compounds can be “easily” removed. The subsequent separation can be achieved by physical methods (e.g. extraction with a non-miscible polar solvent followed by gravity, adsorption or centrifugal separation); oxidized sulfur can also be removed by thermal decomposition. When followed by EDS, the oxidation does not mitigate the solvent loss and energy cost (the solvent-regeneration issues mentioned above) but increases the process selectivity.
The process requires an oxidant (H2O2 is among the best; others are represented in the figure below), a catalyst (e.g. acids) and a phase-transfer agent (PTA) when mass transfer across the aqueous and oil phases represents the rate-limiting step (to enhance the kinetics of the liquid-liquid heterogeneous reaction system).
In fact, PTAs are able to form a complex with the oxidant in the aqueous phase, transporting it across the interface. In summary, ODS can be carried out i) in an acidic medium, ii) with an oxidizing agent, iii) by autoxidation, iv) by catalytic oxidation, v) by photochemical oxidation, vi) by ultrasound oxidation.
Several companies and research groups have introduced an intensification effect by means of ultrasound (US). SulphCo’s patented technology uses ultrasound to induce cavitation in a water/oil stream. During ultrasonic cavitation (under the influence of the pressure rarefaction), cavities arise from dissolved gases by partial vaporization. Depending on the size of these cavities and the pressure variations, they undergo radial motion: the negative pressure induces expansion of the cavity until a maximum radius is attained. The vapor bubbles then undergo a rapid compression phase. The collapse dynamics are faster than mass and heat transfer (the temperature increase is comparable to an adiabatic compression, with heating rates > 10^9 K s^-1) and lead to high pressures (>100 bar) and temperatures (>5000 K), commonly estimated as

T_max = T_a [P_a(γ - 1)/P_i]    and    P_max = P_i [P_a(γ - 1)/P_i]^(γ/(γ-1))

where T_a is the ambient temperature, P_i is the pressure inside the bubble at its maximum size, P_a is the ambient pressure at the moment of transient collapse and γ is the polytropic index of the bubble contents. Thanks to these local extreme conditions, the collapsing cavity becomes a “hot spot”, concentrating the energy in very small zones. At the final moment of bubble collapse, wall motion is far more rapid than the diffusion dynamics of water vapor: the entrapped molecules dissociate, forming radical species.
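The adiabatic-collapse estimate commonly quoted in sonochemistry, T_max = T_a·[P_a(γ − 1)/P_i], shows how hot-spot temperatures above 5000 K arise from ordinary ambient conditions. The input values below are illustrative assumptions, not figures from the text.

```python
# Adiabatic bubble-collapse estimate used in sonochemistry (illustrative;
# the input values below are assumptions, not data from this article):
#   T_max = T_a * (P_a * (gamma - 1) / P_i)
# T_a: ambient temperature, P_a: ambient pressure at collapse,
# P_i: pressure in the bubble at its maximum size, gamma: polytropic index.

def collapse_temperature(T_a: float, P_a: float, P_i: float, gamma: float) -> float:
    return T_a * (P_a * (gamma - 1.0) / P_i)

# Assumed: 293 K liquid, ~2 atm static plus acoustic pressure at collapse,
# ~0.03 atm vapor/gas pressure in the fully expanded bubble, gamma ~ 1.33.
T_max = collapse_temperature(T_a=293.0, P_a=2.0, P_i=0.03, gamma=1.33)
print(f"estimated hot-spot temperature ~ {T_max:.0f} K")
```

With these assumed inputs the estimate lands in the several-thousand-kelvin range, consistent with the >5000 K hot spots cited above.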
On this basis, chemical reactions and physical consequences (intense shear, mixing, and high localized pressures and temperatures) induce and accelerate several chemical processes.
SulphCo® technology has demonstrated the efficient conversion of sulfides and other sulfur species to sulfones (easily removed by downstream separation). Several research groups have tested US to overcome mass-transfer limits and increase reaction kinetics. Akbari et al. investigated the intensification effect that US produces on the efficiency and the catalyst deactivation during the oxidative desulfurization of model diesel over MoO3/Al2O3. Bolla et al. studied the phenomenology of US-assisted ODS of liquid fuels by simulating the bubble dynamics and the chemical reactions involved, as well as by observing the combination of oxidizing agents (e.g. Fenton reagent) and ultrasound. Bhasarkar et al. investigated the combined use of ultrasound and PTA for ODS. Good conversion has been observed in the simultaneous desulfurization/denitrification of liquid fuels in sonochemical flow reactors. Different improvements achieved by US implementation in industrial desulfurization processes are described by Wu and Ondruschka (2010).
Ionic liquids (ILs) have been employed for their extraction characteristics in combined EDS/ODS schemes (see Figure 6). ILs consist of organic cations and inorganic anions; they are high-boiling solvents and can be tuned to meet the requirements of specific applications. Low-viscosity ILs have shown remarkable results for regeneration (by a simple water-dilution and vacuum-distillation process).
The process efficiency increases with oxidized compounds (sulfoxides and sulfones), but ILs are also able to achieve good removal of heterocyclic sulfur compounds. The possible reaction patterns, regeneration features, and future challenges and perspectives have been described by Bhutto et al.
Polymers are widespread in different sectors, from packaging to construction. As shown in Figure 1, polymer production reached about 400 Mton in 2015 and is expected to grow with a CAGR of 3.9% in the period 2015-2020. Production mainly concerns packaging (36%), building and construction (16%) and textiles (15%), while, referring to polymer type, the main ones are PP (17%), LDPE (16%) and PPA fibers (15%). The leading companies are Dow Chemical, BASF SE, Saudi Basic Industries Corporation, China Petrochemical Corporation and Exxon Mobil, whereas the main producing regions are China (29%), Europe (19%) and NAFTA (18%). In this scenario, among the emerging polymers are self-healing polymers, which fall into the class of smart polymers. It is estimated that by 2025 these compounds could reach a market of US$4.1 billion, with a CAGR of 27.2%. Therefore, in the following sections, self-healing polymers and their characteristics are described.
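The growth figures quoted above can be checked with simple compound-growth arithmetic (base value and CAGR taken from the text):

```python
# Compound-growth check of the figures quoted in the text.
def grow(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

# ~400 Mton of production in 2015, growing at 3.9%/yr over 2015-2020:
production_2020 = grow(400.0, 0.039, 5)
print(f"implied 2020 production ~ {production_2020:.0f} Mton")
```

The 3.9% CAGR implies roughly 480-490 Mton by 2020, i.e. about a 20% increase over five years.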
Self-healing polymers are materials that have “the capability to repair themselves when they are damaged without the need for detection or repair by manual intervention of any kind.” When cracks initiate, they lead to chain cleavage and/or slippage with the formation of reactive groups. These groups can form oxidative products or rearrange themselves to repair the defect. According to the operating mechanism, self-healing can be divided into extrinsic and intrinsic, and into automatic and non-automatic. In the first case, the damage is repaired by means of an external agent placed inside the matrix. The external agent can be a liquid (confined in microcapsules, hollow fibers or microvascular networks) or a solid (dispersed in a polymeric matrix), whereas intrinsic materials can repair by themselves. Non-automatic materials need an external stimulus, such as light, heat, a laser beam, or a chemical or mechanical trigger, to repair the crack, while for automatic ones the repair is spontaneous.
The cracks are repaired by a local increase in the mobility of the polymeric chains. This is possible thanks to the reduction of the material’s viscosity using an external/internal stimulus such as thermal energy, irradiation, pH changes, etc. (Figure 3). After cooling, the local properties are restored and the material can be used again. Several parameters can be modified to ensure good physical and mechanical properties, such as molecular weight, cluster distribution and size, crystallinity, etc.
On the basis of the healing mechanism, these compounds can be divided into polymers based on reversible covalent bonds, supramolecular polymers and shape-memory polymers. The first category includes several bonds, such as disulphide, imine and acyl hydrazones. However, the most common are based on Diels-Alder/retro-Diels-Alder reactions. These are called [4+2] cycloaddition reactions because they involve the 4π electrons of the diene and the 2π electrons of the dienophile. The best-known and most widely used system is furan/maleimide, due to its low healing temperature, near 100 °C (for more details, see A. Gandini). In supramolecular polymers, monomers are held together by non-covalent interactions such as hydrogen bonding, π-π stacking interactions, metal-ligand complexes and ionomers. Compared to covalent bonds, non-covalent ones are weaker but more reversible. Shape-memory polymers, instead, are compounds that can be plastically deformed but, by means of external stimuli such as heat, light, etc., can return to their original shape. The matrix is usually composed of two domains: one acts as netpoints defining the original shape of the polymer, and the other acts as molecular switches having memory of the original shape. A trade-off between mechanical strength and healing capacity is represented by polymer blends (for more detail, see L. A. Utracki et al.).

Extrinsic Self-Healing Polymers
Unlike intrinsic self-healing polymers, extrinsic ones require an external agent, placed inside the material matrix, to repair the damage. The healing agent can be confined as a liquid in capsules or networks (capillaries and hollow fibers) or blended as a solid into the polymer. The healing agent is released when these containers rupture and reaches the cracks by means of capillary forces. Microencapsulation and microvascular networks are the most common techniques for making extrinsic self-healing polymers. In the first case, the healing agent can be encapsulated through the reactions of several mixtures (urea-formaldehyde, melamine-formaldehyde, etc.) in an oil-water emulsion (in situ and interfacial techniques) or by dispersing the key component in a melted polymer, which is then emulsified and solidified by changing the temperature or removing the solvent. The healing agent must have low viscosity, good wettability and minimal losses due to volatilisation or diffusion into the polymer matrix. From the first systems based on styrene/polystyrene blends and phenolic resins, research moved on to dicyclopentadiene monomer (DCPD) with the “Grubbs catalyst” and then to polydimethylsiloxane (PDMS). Regarding vascular networks, the most common technique is based on hollow glass tubes in different configurations: all tubes filled with a single resin, such as epoxy particles or cyanoacrylate, or with two “adhesives”, such as an epoxy and its curing agent. Alternatively, one of the compounds can be injected into the tubes and the other confined in microcapsules. However, these techniques only allow 1-2D networks. An emerging method consists of making a scaffold that, after solidification of the polymer matrix, is removed, creating a 3D structure into which the healing agent is then injected.19
The main techniques used to evaluate the healing efficiency are the Tapered Double Cantilever Beam (TDCB) test and the Tear Test. In the first case, a crack is generated in the center of the sample and propagated until failure; the coupon is then repaired by means of the healing properties of the material and loaded again. The Tear Test, instead, is used for elastomeric materials such as PDMS: a rectangular sample with an axial cut and two legs is loaded until the crack propagates into the rest of the material. The healing efficiency is worked out by comparing the properties of the healed and virgin samples, i.e. as the ratio of the healed to the virgin fracture toughness KIC (or critical fracture load PC) for the TDCB test, and of the healed to the virgin tear strength T (or mean tearing force FAvg) for the Tear Test.
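The efficiency calculation described above is simple enough to express in a few lines. A minimal sketch; the property values are illustrative, not taken from the cited works:

```python
def healing_efficiency(healed, virgin):
    """Healing efficiency as the ratio of a healed-sample property to the
    same property measured on the virgin sample."""
    return healed / virgin

# TDCB test: ratio of critical fracture loads P_C (illustrative values, in N)
eta_tdcb = healing_efficiency(healed=48.0, virgin=60.0)
# Tear test: ratio of mean tearing forces F_avg (illustrative values, in N)
eta_tear = healing_efficiency(healed=4.2, virgin=5.0)
print(f"TDCB: {eta_tdcb:.0%}, tear: {eta_tear:.0%}")
```

The same ratio applies whether the compared property is KIC, PC, T or FAvg; only the test that produces the numbers changes.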
Among intrinsic self-healing polymers, an emerging technique is represented by the injection of thermoplastic particles (250-425 μm) of poly(ethylene-co-methacrylic acid) (EMAA) into diglycidyl ether of bisphenol A (DGEBA) epoxy resin polymerized with triethylenetetramine (TETA). The TDCB test, performed at 150 °C for 30 minutes, showed a healing efficiency of about 85%. This was achieved by the formation of bubbles that, expanding, force the healing agent into the cracks.
Keller et al. in their first work tested a matrix of Sylgard 184 PDMS provided by Dow Corning, in which the healing agent was confined in two different urea-formaldehyde capsules: one containing a vinyl-terminated polydimethylsiloxane (PDMS) resin with a platinum catalyst, and the other containing a PDMS copolymer diluted with 20 wt% heptane to reduce the resin viscosity. Polymer and healing agent therefore have the same nature. Tear tests showed a healing efficiency ranging between 70-100%. In a subsequent work, the same polymer and the elastomer RTV 630 provided by GE Silicones were tested under torsional fatigue. The experiments involved four samples of each compound with different amounts of substance in the two capsules. The results showed that the torsional stiffness was recovered after 5 hours, while the fatigue crack was reduced by 24%.
Toohey et al., instead, tried to mimic human skin by creating a 3D microvascular network covered by an epoxy substrate. The coating contained the “Grubbs” catalyst, while the network was filled with the DCPD healing agent. An acoustic emission sensor was used to detect crack events. Increasing the catalyst concentration up to 10% w/w gave a maximum of seven healing cycles. To obtain a greater number of cycles, the structure was therefore modified by introducing multiple isolated networks in which different healing agents can be confined. In this way, a two-part (epoxy resin-amine hardener) alternating structure was obtained and the number of cycles was increased up to 16.
An exhaustive description of the latest advances in self-healing polymers can be found in Zhang et al. and Mauldin et al.20
Self-healing polymers are promising smart materials that try to mimic nature (i.e. the healing of a skin wound, a broken bone, etc.) by repairing themselves without external intervention of any kind (i.e. welding, fusion, etc.).10 These compounds can be applied in several sectors, from packaging to aerospace and from coatings to corrosion prevention,22 and it is estimated that by 2025 they could reach a market size of 4.1 billion US$ with a CAGR of 27.2%.6 They are currently divided into extrinsic and intrinsic, automatic and non-automatic polymers depending on the mechanism of action. Some emerging materials have been presented, from EMAA particles up to 3D microvascular networks. However, these works concern the laboratory scale and only a few products are available; therefore, more effort is necessary for commercialization.
World oil demand is growing steadily and today reaches about 100 million b/d. Conventional oil reserves are about one third of the unconventional ones, such as heavy oil, tight oil, shale gas, methane hydrates, etc. These resources are spread over extensive areas and need specific technologies to be extracted; hence, they are currently very expensive compared to conventional ones. Several Enhanced Oil Recovery technologies exist (thermal, gas and chemical), but they do not exceed 40% recovery. To increase this percentage, it is necessary to better understand the transport of oil and gas in nanoporous rocks. Indeed, due to the pore dimensions and the rock heterogeneity, conventional mathematical models are no longer suitable to describe the flow. In the following sections, the flow in nanoporous rocks, the mathematical tools, and simulation and experimental studies are described.
The flow through nanoporous rocks takes place within channels smaller than 100 nm and cannot be described by conventional models. Unlike conventional reservoirs, unconventional ones have worse porous-bed characteristics: the porosity is between 2-6%, the permeability can change quickly from 0.001 μD up to 1 mD, and the rock is oil-wet (the contact angle between fluid and rock is more than 90°). Referring, for example, to tight oil, the pore diameter is between 30-200 nm, including micro-, meso- and macro-pores. The reservoir is formed by several zones, such as oil + mobile water and gas + oil + immobile water, as shown in Figure 1. Oil production declines to low flow rates within 9-12 months. Therefore, as described in the following sections, several techniques have been studied to enhance oil recovery.
The flow depends on the Knudsen number5 and, due to the pore diameter, it is not a continuum flow. Therefore, it cannot be described by Darcy's law; slip, transition and free-molecular flow need to be considered. The Boltzmann equation can be solved to describe the flow (Figure 2), but to reduce computational costs it is solved only for simple problems. Hence, several mathematical models are used, such as Molecular Dynamics (MD), Direct Simulation Monte Carlo, the Burnett equation and reduced-order Boltzmann equations (LBM and Grad's). Hou et al. proposed combining the positive aspects of the LBM and MD methods: MD is suitable to describe the fluid flow near the surfaces of the porous medium, while LBM describes the rest of the flow, saving time by means of simplified kinetic models.
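The regime classification above can be sketched as a function of the Knudsen number Kn = λ/d (gas mean free path over pore diameter), using the commonly quoted thresholds Kn < 0.001 (continuum), 0.001-0.1 (slip), 0.1-10 (transition) and Kn > 10 (free-molecular). The mean free path value below is an illustrative assumption, not reservoir data:

```python
def knudsen_number(mean_free_path_nm, pore_diameter_nm):
    """Kn = lambda / d: gas mean free path over characteristic pore size."""
    return mean_free_path_nm / pore_diameter_nm

def flow_regime(kn):
    """Commonly used Knudsen-number thresholds for gas flow regimes."""
    if kn < 0.001:
        return "continuum (Darcy law valid)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10:
        return "transition flow"
    return "free-molecular flow"

# Assumed mean free path of ~50 nm in a 100 nm shale pore (illustrative values)
kn = knudsen_number(50.0, 100.0)
print(kn, "->", flow_regime(kn))
```

For a 100 nm pore the flow falls in the transition regime, which is why Darcy's law alone is insufficient for tight formations.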
At the computational level, the porous medium can be simulated in different ways. Unfractured porous media can be described by means of:
- One-Dimensional Models, where the pore space is considered as a series of capillary tubes whose radii may or may not be equal. The model can take into account the tortuosity, but it cannot describe the interconnectivity of the pores.
- Continuum Models, where the domain is considered as a distribution of identical spheres. The model can represent an unconsolidated or consolidated porous medium depending on the overlap of the interconnections.
- Random Hydraulic Conductivity Models, in which the domain is divided into rectangles with a random hydraulic conductivity.
For fractured porous media, the principal models are14:
- Models of a Single Fracture, where the simplest model is represented by two parallel flat plates. It can be solved analytically, but it is not suitable to describe the internal morphology of the fracture, since it does not take its roughness into account.
- Models of Fracture Networks, in which fractured rocks are described as a network of interconnected elements. In this way it is possible to describe the flow in the fractures by means of 2D and 3D models.
- Models of Fractured Porous Media, which are suitable for describing flow in matrices with high permeability. These models include double-porosity and double-permeability models (see for example the model used by Fragoso Amaya). In the former the matrix acts as a storage medium, while in the latter both the matrix and the fracture network contribute to transport and fluid flow.
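The two-parallel-plate single-fracture model is the one case that admits a simple analytical solution, the so-called cubic law, in which the flow rate scales with the cube of the fracture aperture. A minimal sketch with assumed fracture dimensions and fluid properties:

```python
def cubic_law_flow(aperture_m, width_m, length_m, dp_pa, viscosity_pa_s):
    """Volumetric flow rate (m^3/s) between two smooth parallel plates:
    q = w * h^3 * dp / (12 * mu * L), the 'cubic law' single-fracture model."""
    return width_m * aperture_m**3 * dp_pa / (12.0 * viscosity_pa_s * length_m)

# Illustrative values (assumed): 100 um aperture, 1 m wide, 10 m long fracture,
# 1e5 Pa pressure drop, water viscosity 1e-3 Pa.s
q = cubic_law_flow(1e-4, 1.0, 10.0, 1e5, 1e-3)
print(f"{q:.3e} m^3/s")
```

The cubic dependence on the aperture is why neglecting fracture roughness (which locally narrows the aperture) can badly overestimate the flow.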
There are several techniques that improve oil recovery; they can be classified into primary, secondary and tertiary recovery. The first consists of the extraction of oil via natural rise or pumps and allows the recovery of only 5-15% of the hydrocarbons. Secondary recovery, instead, consists of the injection of water/gas into the reservoir and allows about 30% recovery, while tertiary recovery tries to make the formation more suitable for the extraction of oil. Currently these technologies do not exceed 40%. Oil recovery from reservoirs, indeed, depends on different factors such as the Mobility Ratio (M) and the Capillary Number (Nc). The first represents the capacity of the displacing fluid to move through the pores relative to that of the oil: if M > 1, more fluid needs to be injected to obtain an optimal oil saturation in the pores, while M < 1 means that the mobility ratio is favourable. This can be achieved by reducing the viscosity of the oil (i.e. with thermal techniques) or by increasing the viscosity of the displacing fluid (i.e. with chemical techniques). The capillary number, instead, measures the relative weight of viscous forces against interfacial tension. The main techniques to improve oil recovery are described below.
Thermal Enhanced Oil Recovery (TEOR)
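As a quick illustration of the two screening quantities defined above, with M taken as the mobility of the displacing fluid over that of the oil and Nc = μv/σ; the relative permeabilities, viscosities and velocity below are illustrative assumptions:

```python
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """M = (k_rw/mu_w) / (k_ro/mu_o): displacing-fluid mobility over oil mobility."""
    return (k_rw / mu_w) / (k_ro / mu_o)

def capillary_number(viscosity_pa_s, velocity_m_s, ift_n_m):
    """Nc = mu * v / sigma: viscous forces over interfacial tension."""
    return viscosity_pa_s * velocity_m_s / ift_n_m

# Illustrative case (assumed): water (1 cP) displacing a 20 cP oil
M = mobility_ratio(k_rw=0.3, mu_w=1e-3, k_ro=0.8, mu_o=20e-3)
Nc = capillary_number(1e-3, 1e-5, 25e-3)
print(f"M = {M:.1f} ({'unfavourable' if M > 1 else 'favourable'}), Nc = {Nc:.1e}")
```

With these numbers M is well above 1, which is exactly the situation that polymer flooding (raising μw) or thermal recovery (lowering μo) is meant to correct.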
This technique is applied to heavy crude oil with API gravity between 10-20°, reservoir depth less than 3000 ft, permeability of 500 mD and sand thickness between 30-50 ft. It includes Steam Injection and In-situ Combustion. The first consists of the injection of hot steam into the reservoir, reducing the viscosity of the heavy oil and increasing the pressure. Steam can be injected periodically (Cyclic Steam Injection) or by means of two horizontal wells (Steam-Assisted Gravity Drainage, SAGD), where the oil is drained into the lower well by gravity. In-situ combustion consists of the injection of dry or wet air into the reservoir. The combustion of part of the heavy oil (5-10% of the crude) generates a combustion front that advances along the reservoir. This front is sustained by the coke present in the reservoir or, in the case of wet air, by the steam produced.
Gas Enhanced Oil Recovery (GEOR)
This technology includes Miscible Gas Injection and Immiscible Gas Injection. In the former, CO2 or N2 is used to increase oil recovery. As shown in Figure 3 a), the carbon dioxide is injected at 1200 psi and a density of 5 lb/gal; it mixes with the oil trapped in the pores, forming a concentrated mixture that flows back to the surface. The CO2 is then removed from the mixture, recompressed and injected again into the reservoir.
CO2 flooding is also a promising technique for tight oil reservoirs; waterflooding, indeed, could form a film on the pore surface, decreasing the recovery. Figure 3 b) shows the common technique used in tight oil: the wells run vertically down to the tight formation and then parallel to the reservoir, and the gas is injected to fracture the rocks, allowing the oil to move into the wells.
Chemical Enhanced Oil Recovery (CEOR)
In the case of heterogeneous reservoirs, CEOR performs better than GEOR. This technique, indeed, reduces the interfacial tension and alters wettability and mobility. It includes Polymer Flooding, Surfactant Flooding and Alkaline Flooding. The first is used to minimize bypass effects due to capillary forces and to increase the water viscosity; usually, the polymers injected into the reservoir amount to at least about 30% of the reservoir pore volume, and they can be divided into two categories, biopolymers and synthetic polymers. Surfactants, instead, reduce the interfacial tension between oil and water and alter the wettability, but part of these substances is adsorbed onto the rock surface. Alkaline flooding is very efficient in reservoirs with a high acid content: the alkali reacts with the acid to form a surfactant solution that reduces the interfacial tension, promotes emulsification and alters the wettability. Combinations of the previous solutions, such as Surfactant-Polymer Flooding and Alkaline-Surfactant-Polymer Flooding, are often used.
Nanoparticles to Enhance Oil Recovery
Nanoparticles are receiving great attention as an emerging technology for the oil & gas field. These materials, indeed, could be used as sensors injected into the wells to probe the properties of the reservoir (pH, hydrocarbon saturation, etc.) or as a “smart fluid” to increase oil recovery by altering wettability (more water-wet), improving the mobility ratio and reducing the interfacial tension. “Smart fluids” can be divided into three groups: metal oxides (Al2O3, CuO, Fe2O3/Fe3O4, etc.), organic (i.e. carbon nanotubes) and inorganic (i.e. silica). Figure 4 shows the structure of the nanoparticles used to evaluate the oil recovery of a Berea sandstone sample with 17.45° API oil, air and liquid permeabilities of 184 mD and 60 mD respectively, and a porosity of 20%. The best response was given by a mixture of aluminium oxide and silicon oxide at a concentration of 0.05 wt%, due to the reduction of interfacial tension.
Among them, an emerging class of nanoparticles is represented by carbon nanotubes (CNT). These compounds fall into the fullerene category and have good resistance to corrosion. They can be arranged in single or multiple walls made of graphene, and their surface is hydrophobic with a high slip length.6,34 For other applications of nanoparticles in the oil and gas industry, such as corrosion inhibition, methane release from gas hydrates, etc., see Fakoya et al.
Several simulation studies are reported in the literature; some of them are summarized in this section. Moraes de Almeida et al. described the flow of water and light crude oil in silica nanopores by means of Molecular Dynamics. The nanopores were simulated with two hydrophilic terminations (silanol- and siloxane-rich) and three different scenarios were considered: water/oil infiltration into empty nanopores, water infiltration into oil-filled nanopores, and vice versa. For empty nanopores both water and oil infiltrated quickly (0.5 ns for oil and 1 ns for water) and the interfacial tension was reduced by about 35% for oil/siloxane terminations. For the other cases, water infiltration into water- and oil-filled pores occurred at 10 and 5000 atm respectively, while oil infiltration into water-filled pores occurred at 600 atm. Ross et al. studied the friction coefficient for the flow of water inside flat graphitic slabs (5 x 5 nm) and inside/outside carbon nanotubes (5 nm length), varying the characteristic length of the two configurations. A Molecular Dynamics model was used considering no-slip conditions at the solid-fluid interfaces; in this way it was possible to calculate the slip length. The tests showed that the friction coefficients depended on the curvature of the porous surfaces: in particular, they were higher for convex surfaces and lower for concave ones. Lee et al. treated hydrocarbon recovery from shale gas. They simulated the kerogen structure by means of several models (disordered, ordered and composite) based on molecular and statistical simulation. The recovery depends on the interfacial tension and is thermally activated: the energy barrier is high for immiscible fluids such as water, and lower for miscible ones such as CO2 and C3H8. Unlike carbon dioxide, propane is recovered together with the extracted methane.
Alfarge et al. simulated oil recovery from the Bakken formation by injecting three different miscible gases: CO2, lean gas and rich gas. The well was stimulated by means of 5 hydraulic fractures spaced about 200 ft apart. The tests initially showed high production, followed by a rapid decline due to the reduction of pressure near the production well. Three different scenarios were simulated, changing the number of cycles from two to ten, the duration of injection from two to six months and the duration of soaking from one to three months. The use of CO2 increased molar diffusivity, while rich gases needed a longer soaking period and lean gases required a larger injected volume. Prajapati et al. simulated the flow through shale reservoirs. They considered a binary CH4-CO2 mixture flowing through a kerogen matrix by means of four models: Wilke, Wilke-Bosanquet, Maxwell-Stefan and the Dusty Gas Model. This led to a system of nonlinear equations solved by means of COMSOL Multiphysics. It was demonstrated that Knudsen diffusion and binary molecular diffusion have to be considered: indeed, the flux is 10 times higher with the Wilke and Maxwell-Stefan models than with the Wilke-Bosanquet and Dusty Gas Models. Regarding pilot tests, in 2010 there were about 1500 EOR projects (i.e. Carabobo, Grosmont, etc.), of which 78% refer to sandstone, 18% to carbonate and 4% to turbidite and offshore fields. Among EOR technologies, thermal and chemical projects are widespread in sandstone, while gas and water recovery prevail in the rest. One of the most interesting projects concerns the Bakken formation, one of the biggest oil and gas reservoirs in the USA. It is estimated that this geological formation could yield up to 40 billion barrels, but only 10% is currently recovered due to the low permeability (0.0018-0.0036 mD). Therefore, from 2008 to 2014 seven pilot tests were performed to improve oil recovery: 2 in Montana and 5 in North Dakota.
Several techniques were used: cyclic injection of CO2 and water, flooding with water and enriched natural gas, and vertical injection of CO2. Despite the ultra-low permeability, it emerges that injectivity is not an issue for either gas or water; however, the increase in oil recovery is low. Therefore, new tests need to be performed to understand the fracture networks and the flow in nanoporous rocks, and to collect more data. This can be achieved by means of cores from the vertical and lateral sections, subsequently analysed in laboratories (for more information about pilot tests see).
The most mature and widely used technology is Underground Gas Storage (UGS); today, indeed, there are 630 underground gas storage facilities. The gas is injected from the pipeline into the subsurface, for example into depleted oil reservoirs, when demand is low, and is used when demand grows. The storages do not have 100% efficiency, because part of the gas, called “cushion gas”, remains in the subsurface to keep the reservoir pressurized. A promising technology is Carbon Capture and Storage (CCS), where the CO2 injected into the subsurface can work as a displacing fluid (see Section Gas Enhanced Oil Recovery) or can be stored. Generally, it is injected at a depth of about 800 m, where CO2 is in a liquid or supercritical state. It can be trapped by a “cap rock”, such as a clay rock that is impermeable to CO2, or by capillary forces that block the CO2 in the pores.
Technologies for Enhanced Oil Recovery of unconventional hydrocarbons and for energy storage already exist. The most widespread are TEOR (Thermal Enhanced Oil Recovery) and Underground Gas Storage, but they do not achieve high efficiency.
Several mathematical models are used to describe the flow in porous rocks. However, porous media have a chaotic configuration and the transport equations can be solved analytically only in a few cases. Furthermore, the models are based on simplifying hypotheses that allow only a specific phenomenon to be described. Therefore, it is necessary to continue investigating the hydrodynamics of nanoporous rocks by means of pilot tests (i.e. Carabobo, Grosmont, Bakken, etc.). In this way it is possible to improve the technologies and models that describe these phenomena exhaustively. Among emerging technologies, nanoparticles (i.e. silica, CNT, etc.) can play a pivotal role in increasing oil recovery. However, these compounds have been tested only at the laboratory scale and are very expensive; therefore, it is necessary to reduce the production cost while achieving better performance at lower concentrations.
By 2020, world chemical production will have increased by 144 million metric tons, reaching a market of 4,650 billion US$. Automation and process control play a pivotal role in industrial plants: they improve product quality, plant efficiency, and the safety and reliability of the processes.
Automatic feedback controls were introduced in the 1920-1930s, mounted directly on the controlled equipment. Since then, process control has spread rapidly, from the first digital devices at the end of the 1950s up to Programmable Logic Controllers (PLCs) and Distributed Control Systems (DCS) in the 1970s. Nowadays networks of computers manipulate thousands of variables, but 85-95% of feedback control loops are still based on the Proportional-Integral-Derivative (PID) scheme developed in the 1930s, and the flow rates of liquids and gases are controlled by pneumatic valves. The use of advanced controls can increase a plant's profit margin by 10-20% and reduce emissions by about 70%. Therefore, in the following sections PID tuning optimization, APC (Advanced Process Control) and MPC (Model Predictive Control) are described. Finally, an overview of the latest software in process control and of “smart control” is given.
Aspen Technology Inc. has defined five levels of maturity for refineries and chemical plants depending on the control level, from level zero, where no process simulation is used, up to level four, where several models are combined in a single flowsheet and engineers can make decisions by monitoring key parameters. As can be seen in Figure 1, a plant usually runs in a safety zone called the “comfort zone”, away from the constraint limits. With PID optimization and APC it is possible to reduce the amplitude of oscillations by a factor of three to ten, working near the constraint limits and increasing productivity and profit margins.
PID controllers are the most common controls used in chemical and petrochemical plants due to their easy implementation and robustness (Figure 2).
The controller takes a corrective action depending on the magnitude of the error.
To obtain the desired outputs, either a separate controller for each variable (decentralized strategy) or a single controller manipulating all the variables (centralized strategy) can be used.3 The PID controller is usually written in the parallel form:

u(t) = u̅ + Kc [ e(t) + (1/τI) ∫ e(t*) dt* + τD de(t)/dt ]

where u̅ = bias (steady-state) value; Kc = controller gain; e(t) = error signal, equal to (set point − measured value); τI = integral or reset time; τD = derivative time.
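A minimal discrete-time implementation of the parallel-form PID above; the tuning values in the example are illustrative assumptions:

```python
class PID:
    """Discrete parallel-form PID:
    u = u_bias + Kc*(e + (1/tau_i)*integral(e) + tau_d*de/dt)."""
    def __init__(self, kc, tau_i, tau_d, u_bias=0.0, dt=0.1):
        self.kc, self.tau_i, self.tau_d = kc, tau_i, tau_d
        self.u_bias, self.dt = u_bias, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # rectangular integration
        derivative = (error - self.prev_error) / self.dt   # backward difference
        self.prev_error = error
        return self.u_bias + self.kc * (
            error + self.integral / self.tau_i + self.tau_d * derivative)

# One controller step with illustrative (assumed) tuning values
pid = PID(kc=2.0, tau_i=5.0, tau_d=0.5, dt=0.1)
print(pid.update(setpoint=50.0, measurement=48.0))
```

Industrial implementations add refinements not shown here (anti-windup, derivative filtering, derivative on measurement), but the three terms are the same.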
Several tuning techniques allow proper PID parameters to be found. These methods have been developed since the 1940s and can be divided into two categories: classical and artificial-intelligence methods.
Classical methods include the Ziegler-Nichols and Cohen-Coon methods. Ziegler and Nichols proposed two methods. The first, called “step response”, can be applied only to open-loop stable plants: it considers, indeed, the S-shaped response without overshoot typical of industrial processes. As can be seen in Figure 3, the delay time (L) is given by the intersection of the tangent line at the inflection point of the curve with the x-axis, while the time constant (T) is given by its intersection with the steady-state line. From these values it is possible to find the PID parameters.
The second method is called the “continuous cycling method”. It finds the critical frequency of the system by increasing the proportional gain up to the stability limit. The two parameters that describe the response of the system are KCU (ultimate gain) and PU (ultimate period). Table 1 shows the relationships between the PID parameters and the two Ziegler-Nichols methods.
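The continuous-cycling rules can be sketched directly from KCU and PU, using the classic Ziegler-Nichols table (Kc = 0.6 KCU, τI = PU/2, τD = PU/8 for a PID controller); the example values for KCU and PU are assumptions:

```python
def ziegler_nichols(kcu, pu, controller="PID"):
    """Ziegler-Nichols continuous-cycling tuning rules.
    kcu = ultimate gain, pu = ultimate period."""
    rules = {
        "P":   (0.50 * kcu, None,     None),
        "PI":  (0.45 * kcu, pu / 1.2, None),
        "PID": (0.60 * kcu, pu / 2.0, pu / 8.0),
    }
    return rules[controller]          # (Kc, tau_I, tau_D)

# Example: assumed ultimate gain 8.0 and ultimate period 4.0 min
print(ziegler_nichols(8.0, 4.0))
```

These settings aim at the quarter-decay-ratio response Ziegler and Nichols targeted; they are a starting point, usually detuned afterwards.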
Ziegler-Nichols methods are suitable for level control but not for flow or liquid pressure control, which require a rapid response. In these cases the Cohen-Coon method is used. This method places three closed-loop poles, two complex and one real, which minimize the integrated error and give a decay ratio of about 1/4.
Artificial-intelligence methods include dozens of algorithms. Some of them, such as Genetic and Differential Evolution Algorithms, are described here; for others, such as Simulated Annealing (SA), fuzzy systems, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), etc., see the references.
The Genetic Algorithm starts from a random population of binary strings called chromosomes, each of which represents a solution of the problem. The strings are decoded into real numbers that define the PID parameters. These values are applied by the PID controller and the response is evaluated by means of an objective function such as MSE (Mean Square Error), IAE (Integral Absolute Error), ISE (Integral Squared Error), etc. The fitness values are subjected to a process of selection, crossover and mutation until the best fitness is obtained.
The Differential Evolution Algorithm, instead, starts from the initialization of a real-encoded matrix whose rows represent the PID parameters and whose columns represent the population vectors. Each population member is evaluated by the PID controller and the result represents its fitness value. A mutation and crossover step involving the target vector (the current vector of the population) and a mutant vector (built from three random vectors of the population) generates a trial vector, whose fitness value is computed through the PID. Finally, the fitness values of the target and trial vectors are compared and the vector with the minimum value is selected; in this way an individual of the new population is generated. The algorithm stops when the new population is complete.
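The loop just described can be sketched end to end: a small differential evolution routine tuning a PID with an ISE objective. The plant (a first-order process), the parameter bounds and the DE settings are all illustrative assumptions, not values from the cited works:

```python
import random

def ise(params, n_steps=200, dt=0.1):
    """ISE of a unit set-point step for a PID controlling an assumed
    first-order plant dy/dt = (-y + u)/5."""
    kc, tau_i, tau_d = params
    y = integral = prev_e = cost = 0.0
    for _ in range(n_steps):
        e = 1.0 - y
        integral += e * dt
        u = kc * (e + integral / tau_i + tau_d * (e - prev_e) / dt)
        prev_e = e
        y += dt * (-y + u) / 5.0
        cost += e * e * dt
        if abs(y) > 1e3:              # unstable tuning: penalise and stop
            return 1e6
    return cost

def differential_evolution(obj, bounds, pop_size=15, gens=40, f=0.8, cr=0.9):
    random.seed(1)                    # reproducible run
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: mutant = a + F*(b - c) from three distinct random vectors
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            mutant = [min(max(a[d] + f * (b[d] - c[d]), lo), hi)
                      for d, (lo, hi) in enumerate(bounds)]
            # crossover: mix target (current) and mutant into a trial vector
            trial = [mutant[d] if random.random() < cr else pop[i][d]
                     for d in range(len(bounds))]
            # selection: keep whichever of target and trial has lower fitness
            ft = obj(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

bounds = [(0.1, 10.0), (0.5, 20.0), (0.0, 2.0)]   # Kc, tau_i, tau_d search ranges
best_params, best_cost = differential_evolution(ise, bounds)
print("Kc, tau_i, tau_d =", best_params, "ISE =", best_cost)
```

The mutation/crossover/selection steps map one-to-one onto the description above; only the plant model would change in a real tuning exercise.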
The main limit of feedback control is that the corrective action takes place only after the output has been perturbed from its set point. Therefore, more advanced schemes were developed, such as PID plus feedforward, which intervenes before the disturbance affects the output, or cascade control, composed of two controllers, two sensors and one actuator acting on two processes in series. Although PID controllers ensure good stability and disturbance suppression, they fail to optimize process performance because of the multivariable nature of the process and the complex interactions between the controlled variables. Therefore, Advanced Process Control is necessary.
APC includes all software that allows critical variables to be controlled and quality to be predicted in real time, such as:
An example of APC implementation is described by Howes et al.6 for a lubricating oil process. The system consists of 12 manipulated variables, 28 controlled variables and 11 feedforward controls. By means of the Pitops software developed by Pi Control, the plant increased its production rate by 5%, saving about 1.3 M€; the software, indeed, identifies the parameters of the system in 10 minutes from historical data, without step tests. Other examples are Canada's Yara Belle Plaine Inc. and South Korea's LG Petrochemical Corp. The first applied APC techniques to a nitric acid plant, reducing methane emissions by 25% while maintaining a high combustion temperature. The latter, applied to a naphtha cracker, improved the yield by 5%, reduced the energy consumption of the cold side by 8% and saved 100,000 $/y.
MPC
The precursor of MPC is LQG (Linear Quadratic Gaussian) control, developed by Kalman in the 1960s, but the first MPC generation appeared in the 1970s with IDCOM (developed within ADERSA) and DMC (developed within Shell Oil). We have now reached the fifth generation, where Honeywell, AspenTech and Shell dominate the market. MPC is suitable for describing the behaviour of MIMO (Multi-Input, Multi-Output) processes.
As can be seen in Figure 6, a classical plant control structure provides different hierarchical levels: plant-wide optimization, a local economic optimizer and dynamic constraint control. Usually this is done by several PID controllers, lead-lag (L/L) blocks and high/low select logic. With MPC, shown in more detail on the right, the same result can be achieved more effectively by acting on the difference between the actual and predicted values (residuals).11
Nowadays the main MPC software packages include DMCplus developed by AspenTech, SMOC by Shell Global and RMPCT by Honeywell. A brief description of each follows; for more details see Lahiri.
DMCplus derives from the fusion of Dynamic Matrix Control (DMC) and the Setpoint Multivariable Control Architecture (SMCA). The software is composed of several packages and allows the simulation of Finite Impulse Response (FIR), linear Multi-Input, Multi-Output (MIMO) and nonlinear Multi-Input, Single-Output (MISO) state-space models. Recently AspenTech has introduced Adaptive Process Control, which reduces the possibility of “flipping” behaviour of the plant due to mismatch between the plant and the controller model. This package, indeed, forces the system to work in an optimum area instead of at an optimum set point, so that the performance of the plant is not compromised. The model is also built from historical process data, with the parameters adjusted online by an adaptive model, saving time.
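The DMC core from which DMCplus descends can be sketched in a few lines: a dynamic matrix built from step-response coefficients and an unconstrained least-squares solution for the control moves. The plant model (first-order, gain 2, time constant 5) and the horizons below are assumptions for illustration, not part of any commercial package:

```python
import numpy as np

def dynamic_matrix(step_response, p, m):
    """Lower-triangular DMC dynamic matrix from unit-step-response
    coefficients; p = prediction horizon, m = control horizon."""
    A = np.zeros((p, m))
    for i in range(p):
        for j in range(min(i + 1, m)):
            A[i, j] = step_response[i - j]
    return A

# Step-response coefficients of an assumed first-order plant (gain 2, tau 5, dt 1)
s = 2.0 * (1.0 - np.exp(-np.arange(1, 31) / 5.0))

p, m = 10, 3
A = dynamic_matrix(s, p, m)
e = np.ones(p)                        # predicted error for a unit set-point step
# Unconstrained DMC: least-squares control moves minimizing the predicted error
du = np.linalg.lstsq(A, e, rcond=None)[0]
print("control moves:", du)
```

Industrial MPC adds move suppression, constraints and a receding horizon on top of this least-squares core, but the dynamic-matrix prediction is the same idea.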
SMOC has been used in more than 430 applications, such as crude distillation, hydrocracking, styrene production, etc.
It includes several packages:
In the era of digital devices, “smart control” for the chemical and petrochemical industry can play a key role in reducing costs, saving materials and increasing production rates. The idea is to create intelligent networks where flowsheets and variables are optimized in real time. Emerson is a leading company in this sector, providing smart solutions for both old and new refineries, such as electronic marshalling and the HART (Highway Addressable Remote Transducer) protocol. The former eliminates cross-wiring, reducing the space occupied and the time needed to add new I/O interfaces; the HART protocol, instead, combines analog signals with digital communication, removing recurring problems and predicting unexpected failures. Several companies have implemented smart controls: Chevron/PDVSA at the Petropiar refinery saved 70 M$ in two years, reducing pre-commissioning and commissioning costs by 40% and losses due to instrument faults by 60%. In China, Sinopec launched four pilot plants (Jiujiang, Zhenhai, Maoming, and Yanshan) using advanced control and online optimization. This increased profits by about 10% (i.e. at Yanshan and Maoming profits increased by 25.12 million CNY and 41.94 million CNY respectively).30
Process control is very common in refineries and chemical plants. It was first used in the 1920-1930s and today is essential to guarantee product quality and the safety and reliability of the processes. Despite technological progress, 85-95% of feedback control loops are based on PID controllers and the main control systems date back to 1985. The value of technologies that have reached their end of life, or are more than 20 years old, is about US$ 65 billion and US$ 53 billion respectively.33 Therefore, several tuning optimizations, such as artificial-intelligence methods (Genetic and Differential Evolution Algorithms), together with Advanced Process Control (APC), have been described. Furthermore, some examples of the advantages offered by the implementation of APC have been shown, and the latest software for Model Predictive Control (MPC), such as DMCplus, SMOC and RMPCT, has been illustrated. In this scenario, “smart control” in chemical and petrochemical plants can play a pivotal role in reducing costs, increasing profits and creating safer plants. Current estimates indicate that a plant's profit margin can improve by about 10-20% while emissions can decrease by about 70%.
Energy use grew from 4.6 Mtoe in 1973 to 13.4 Mtoe in 2012. Total final energy consumption decreased in Europe while it increased in non-OECD countries, rising by a further 1.3% in 2014 (e.g. 3.1% in China and 4.3% in India).
Figure 1 shows world energy consumption for OECD and non-OECD countries from 1990 to 2040. As can be seen, from 2010 to 2040 it will grow by 56%, from 524 quadrillion BTU to 820 quadrillion BTU. The industrial sector will consume more than 50% of this energy in 2040, and 80% of it will be produced from fossil fuels.
In this scenario, the chemical and petrochemical sectors account for a large part of industrial energy consumption (~30% including feedstocks). Therefore, in the following section, Best Practice Technologies (BPT) that save energy and reduce CO2 emissions are described.
Table 1 – Equipment, steam distribution and controls: measures to increase energy efficiency (e.g. electric motors for pumps, compressors and fans).
Table 2 – Chemical compounds production: measures to increase energy efficiency.
The main chemical and petrochemical processes (e.g. steam cracking, ammonia production, etc.) use catalysts to increase the rate of specific reactions, improving the yield. The IEA, in collaboration with the International Council of Chemical Associations (ICCA) and DECHEMA, estimated that improvements in catalysts and related processes could reduce energy consumption by 20-40% by 2050. Recently, new processes have been developed to produce these compounds at lower costs:
The International Energy Agency (IEA), in the report “Chemical and Petrochemical: Potential of Best Practice Technology and other measures for improving energy efficiencies”, has defined two different indices for energy efficiency and CO2 savings.
The former is the ratio between the sum of the minimum energy associated with each process and the total energy used by chemical and petrochemical processes (Table 3). The latter takes into account only direct emissions, excluding those related to electricity use and waste treatment (Table 4).
The value of both indices depends on the approach used. In both the top-down and bottom-up approaches, the energy efficiency index is the ratio between the potential performance of the sector under BPT and its current performance. However, in the top-down approach the BPT values are scaled by a coverage factor set equal to 0.95 for all countries, whereas in the bottom-up approach this value is specific to each country. The coverage factor accounts for the fact that not all processes are considered. Table 3 shows the results for 57 processes and 66 chemical products. Considering electricity, the improvement potential reaches 20%.5
Table 3 columns: Country; TFEU [PJ/y]; (BPT)T-D [PJ/y]; (BPT)B-U [PJ/y]; (EEIj)T-D [%]; (EEIj)B-U [%]; IT-D [%]; IB-U [%].
The top-down approach underestimates the improvement potential for China and India, leading to negative values, while the bottom-up approach leads, for some countries, to coverage factors of more than 100%. Therefore, both methods have critical weaknesses due to over- or underestimation of the processes; indeed, heat cascading and co-generation are neglected.
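One plausible reading of the index definitions above can be sketched as follows. The TFEU and BPT figures are hypothetical, and the treatment of the coverage factor is an assumption based on the verbal description, not the IEA's exact formulation.

```python
# Energy Efficiency Index sketch for the two approaches described above.
# All numbers are hypothetical, for illustration only.
def eei_top_down(bpt_energy, tfeu, coverage=0.95):
    # BPT energy scaled by a fixed coverage factor (0.95 for all countries)
    return bpt_energy / (coverage * tfeu)

def eei_bottom_up(bpt_energy, tfeu, coverage):
    # same ratio, but with a country-specific coverage factor
    return bpt_energy / (coverage * tfeu)

tfeu = 1000.0   # total final energy use, PJ/y (hypothetical)
bpt = 760.0     # energy use if all covered processes ran at BPT, PJ/y

eei_td = eei_top_down(bpt, tfeu)              # 760 / 950 = 0.80
improvement_td = 1.0 - eei_td                 # ~20% potential saving
eei_bu = eei_bottom_up(bpt, tfeu, coverage=0.90)   # country-specific factor
```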
Direct CO2 Emissions
Table 4 columns: T-D [%]; B-U [%]; T-D [%]; B-U [%].
Finally, figure 3 shows the energy saving potential with BPT and other options such as co-generation, recycling, energy recovery, etc. For the chemical and petrochemical sectors, the energy saving potential with BPT amounts to 120-150 Mtoe/year and 370-470 MtCO2/year.7
The chemical and petrochemical sectors are the largest energy users within the industrial sector, reaching 30% of final consumption in 2012. There are several measures to improve energy efficiency (Table 1 and Table 2), and some of the emerging processes are Methanol to Olefins (MTO), Hydrogen Peroxide to Propylene Oxide (HPPO) and Gas to Liquids (GTL). The International Energy Agency (IEA) has defined two indices to evaluate the potential energy efficiency and CO2 savings achievable by applying Best Practice Technologies (BPT), a term that groups the most advanced technologies economically available at industrial scale. The value of these indices depends on the approach used: top-down or bottom-up. The two methods lead to different results, and both in some cases overestimate or underestimate the improvement potential. Therefore, it is necessary to consider more data and to combine BPT with co-generation, energy recycling and the use of biomass feedstocks. The IEA, in collaboration with the International Council of Chemical Associations (ICCA) and DECHEMA, also defines four pathways to be followed in the future: improving feedstock energy (i.e. production of synthesis gas from several raw materials), fuels from gas and coal, new routes to polymers (i.e. saccharification of lignocellulose into bioethanol) and hydrogen production (i.e. from biomass, waste materials, improved water electrolysis, etc.).
Catalysts are compounds used to increase the rate of a specific reaction by reducing its activation energy. This lowers the temperature/pressure of the process, saving fuel. Catalysts can be homogeneous or heterogeneous depending on the phases involved in the reaction (i.e. heterogeneous catalysts are usually solid while the reagents are liquid or gaseous). These substances are not consumed by the reactions, but over time their catalytic activity and selectivity decrease due to phenomena such as poisoning, fouling, coking, carbon deposition and sintering; therefore regeneration is necessary. In 2014 the global market for catalysts and catalyst regeneration reached 24.6 billion US$, and it is estimated to reach 34.3 billion US$ in 2024.
In the process, a depropanizer and a diisopropylbenzene (DIPB) column are used. The former removes propane from the alkylation reactor effluent, while the latter separates DIPB from heavy aromatics. A transalkylation reactor, in which DIPB reacts with benzene, is also used to improve the yield of cumene.
Catalyst technologies are used in refining processes such as:
Catalysts reduce the temperature/pressure of a reaction, decreasing the amount of fuel, feedstock and expensive materials involved in the process. Therefore, it is crucial to develop new catalysts and optimize existing ones. When a new catalyst is synthesized, the first step is to select the chemical elements by means of mathematical algorithms and discard those that are not suitable. For example, choosing among 50 chemical elements, the possible combinations number in the thousands, from 1,225 binary up to 230,300 quaternary combinations. Before commercialization, the synthesized catalyst is tested at laboratory scale and then in a pilot plant under different operating conditions. The reactors (fixed bed, fluidized bed, etc.) used in the experimental tests affect the shape and texture of the catalysts (pellets, spherical or granular particles, etc.). In the following section, for example, the most recent catalysts developed by BASF and Clariant are described:
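The combination counts quoted above are plain binomial coefficients, which can be checked directly:

```python
from math import comb

# Number of ways to combine candidate elements for catalyst screening.
# With 50 candidate elements, as in the example above:
n = 50
binary = comb(n, 2)      # unordered pairs of elements
ternary = comb(n, 3)     # three-element combinations
quaternary = comb(n, 4)  # four-element combinations
```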
Since the first large-scale plants for the production of sulfuric acid in 1875, catalysts have spread rapidly through industrial processes. In the chemical sector they are used for the production of several compounds such as xylene, ethylbenzene, cumene and so on. In refining processes, they are used in hydrotreating, catalytic reforming, isomerization, synthetic fuels, catalytic dewaxing, etc. Nowadays about 80-90%24 of chemical processes adopt catalysts (mainly heterogeneous catalysts) and the global market for their production/regeneration reaches billions of US$. Therefore, it is necessary to develop new catalysts and to optimize the selectivity and activity of existing ones by reducing deactivation processes.
The use of software for the solution of complex problems dates back to the 1960s. Since then, computational chemistry has grown quickly by means of increasingly powerful computers:
Since the 1990s PC programs have played a key role, and nowadays they are widespread in petroleum and petrochemical processing. In the following section the basics of computational chemistry and the principles of the main commercial software are described.
Computational chemistry is the branch of chemistry that uses mathematical models simulated on computers:
The methods on which the models are based can be divided into Classical Computational Methods and Computational Quantum Chemistry.
These methods are based on the laws of classical mechanics and include:
A combination of Quantum Mechanics and Molecular Mechanics (QM/MM) is used to describe reactions in a condensed phase. A small part of the system is treated with Quantum Mechanics, which takes into account the new configuration of electrons due to chemical reactions. The rest is treated with Molecular Mechanics, which describes the molecular geometry.
Process simulation started in 1966, when Simulation Sciences launched the program PROCESS (today PRO/II) for the simulation of distillation columns. Nowadays it is widespread due to the possibility of simulating both steady-state and dynamic behaviour. Steady-state simulation is used for equipment design and plant debottlenecking, while dynamic simulations are used to reproduce start-up, shut-down, disturbances, operability, etc.
The main software packages used in industrial processes are based on two techniques:
The tear stream approach assigns an initial value to the stream; in this way the blocks can be solved sequentially. The initial guess is then updated by an algorithm until convergence is reached. The method is suitable for steady-state simulation, but it is time-consuming for very complex systems.
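A minimal sketch of the tear-stream idea, assuming a toy mixer-reactor-separator flowsheet with one recycle; all stream values, conversions and split fractions are hypothetical.

```python
# Tear-stream (sequential modular) sketch: a mixer -> reactor -> separator
# flowsheet with a recycle. The recycle stream is "torn": we guess its value,
# solve the blocks in sequence, and iterate until the guess reproduces itself.
feed = 100.0         # fresh feed, kmol/h (hypothetical)
conversion = 0.6     # single-pass conversion in the reactor
recycle_frac = 0.9   # fraction of unreacted material recycled

recycle = 0.0        # initial guess for the torn stream
for i in range(100):
    reactor_in = feed + recycle                  # mixer block
    unreacted = reactor_in * (1 - conversion)    # reactor block
    new_recycle = unreacted * recycle_frac       # separator/splitter block
    if abs(new_recycle - recycle) < 1e-9:        # convergence check
        break
    recycle = new_recycle                        # direct substitution update
```

Direct substitution converges here because the recycle loop is a contraction; commercial simulators accelerate such loops with Wegstein or Broyden updates.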
Figure 3 - Equation Oriented Approach (9)
The combination of the two models (SM & EO) is called the Simultaneous Modular Approach.
In this section, the main commercial software are listed:
Since the 1960s computational chemistry (classical and quantum) has played a pivotal role in solving complex problems. Nowadays commercial programs are based on two mathematical models: the Sequential Modular Approach (SM) and the Equation Oriented Approach (EO). SM is suitable for steady-state solutions, while EO suits dynamic processes and real-time optimization. Several software packages (Aspen Plus, PRO/II, gPROMS, etc.) can reproduce the main petroleum and petrochemical processes; but despite ever more powerful computers, some simulations remain time-consuming. Therefore the future challenge is to reduce this time further and to integrate different modelling components and environments through a standard interface (i.e. the CAPE-OPEN project).
Corrosion is the destructive attack of a metal by chemical or electrochemical reaction with its environment. It is called “anti-metallurgy” because it tends to bring metals back to their natural state, combined with other elements (especially with O2). Deterioration by physical causes is not called corrosion, but erosion, galling, or wear. There are different types of corrosion: uniform, pitting, crevice, intergranular, galvanic, etc., and they affect different sectors: infrastructure, utilities, production, manufacturing and transportation. Corrosion costs are due to lost production and health, safety and environmental issues. In the USA, considering direct costs only, corrosion costs grew from 276 billion US$ in 1998 to 1.1 trillion US$ in 2016.
Table 1 reports the global corrosion costs for 2013.
As can be seen, these costs reached 2.5 trillion US$, corresponding to 3.4% of global Gross Domestic Product. NACE International has estimated that the application of corrosion prevention techniques could save 375-875 billion US$ (15-35% of the total cost).
The following sections describe the most common types of corrosion in industrial processes, such as oil and gas refining and corrosion due to water and soil. Finally, methods to prevent and monitor corrosion are described.
Corrosion is widespread in oil and gas refining; indeed, refining processes operate at high pressures and temperatures. In addition, harmful fluids give rise to specific forms of corrosion (sulfidic corrosion, naphthenic acid corrosion, sour water corrosion, etc.).
The European Commission’s report on “Corrosion Related Accidents in Petroleum Refineries” highlights that the most sensitive equipment, in the 99 refineries analysed, is the distillation unit (23% of failures) followed by hydrotreatments equipment (20%); 17% of failures occurred in the pipeline for transport between units, 4% in tubes of heat exchanger and cooling equipment, 15% took place in storage tanks, whereas the rest involved other equipment component like trays, drums and towers.
Water is a very aggressive natural electrolyte for many metals and alloys due to dissolved oxygen. Other elements that affect corrosion are pH, chlorides, Total Dissolved Solids (TDS), hardness and high temperature.
The Langelier Saturation Index (LSI) is one of the most common indices used to evaluate water corrosivity: LSI = pH − pHs, where pH is the measured pH of the water and pHs is the pH at saturation with calcium carbonate. A negative LSI indicates corrosive water, while a positive LSI indicates a tendency to form scale.
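A sketch of an LSI calculation, using the common Carrier-type empirical correlation for the saturation pH; the correlation constants are standard textbook values, and the water analysis below is hypothetical.

```python
from math import log10

# Langelier Saturation Index sketch. pHs is estimated with the widely used
# empirical (Carrier-type) correlation; the input water analysis is hypothetical.
def lsi(ph, temp_c, tds_mg_l, ca_hardness_caco3, alkalinity_caco3):
    a = (log10(tds_mg_l) - 1) / 10                 # TDS factor
    b = -13.12 * log10(temp_c + 273) + 34.55       # temperature factor
    c = log10(ca_hardness_caco3) - 0.4             # calcium hardness factor
    d = log10(alkalinity_caco3)                    # alkalinity factor
    phs = (9.3 + a + b) - (c + d)                  # saturation pH
    return ph - phs                                # >0 scale-forming, <0 corrosive

index = lsi(ph=7.5, temp_c=25, tds_mg_l=400,
            ca_hardness_caco3=240, alkalinity_caco3=196)
```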
Soil corrosivity depends on electrical conductivity, oxygen concentration, and salt and acid content. It is common in storage tanks, cables and pipelines. Soil aeration is a good way to reduce corrosion because aerated ground has higher evaporation rates and lower water retention.
As abovementioned, corrosion costs are very high. Therefore, it is necessary to prevent and monitor the corrosion development during equipment operation.
There are several techniques for corrosion measurement, and they can be divided into Non-Destructive Techniques and Corrosion Monitoring Techniques.
Non-Destructive Techniques are used when it is not possible to remove damaged materials, and include:
Two- or three-electrode probes are inserted into the process system. A potential of about 20 mV is applied between the elements and the resulting current is measured. This method allows monitoring of general and galvanic corrosion and, qualitatively, of local corrosion such as pitting and crevice corrosion. It is suitable for evaluating corrosion rate in real time.
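Such a probe reading can be converted into a corrosion rate via the Stern-Geary relation and an ASTM G102-style conversion factor. The sketch below uses typical carbon-steel constants; the probe reading itself is hypothetical.

```python
# Corrosion rate from a linear polarization resistance (LPR) reading using the
# Stern-Geary relation. The Stern-Geary constant and steel properties are
# typical textbook values; the measured current density is hypothetical.
B = 26.0                 # Stern-Geary constant, mV (common default for steel)
delta_e = 20.0           # applied polarization, mV (as for the probe above)
delta_i = 0.77           # measured current density, uA/cm^2 (hypothetical)
rp = delta_e / delta_i   # polarization resistance, kOhm*cm^2
i_corr = B / rp          # corrosion current density, uA/cm^2
ew, rho = 27.92, 7.87    # equivalent weight (g) and density (g/cm^3) of steel
rate_mm_y = 3.27e-3 * i_corr * ew / rho   # corrosion rate, mm/year
```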
However, problems remain in integrating corrosion measurements within DCS, because the measurements are qualitative rather than quantitative (28). Therefore, they cannot be used as process variables to be manipulated. At the same time, no single method can evaluate all the different kinds of corrosion. Recently, new multivariable corrosion transmitters and wireless systems have been developed, but further efforts are needed to reduce the risks of corrosion.
Corrosion control is a real problem for industrial processes. It affects all sectors and, in hazardous plants such as oil refineries, it can cause serious damage to the environment and to people (e.g. the Sinopec gas pipeline explosion). Several methods for corrosion mitigation (cathodic protection, protective coatings, etc.) and monitoring (eddy current techniques, corrosion coupons, etc.) exist. Despite this, corrosion causes losses of trillions of US$; nowadays these costs amount to 3-4% of global Gross Domestic Product. Therefore, it is necessary to control corrosion by integrating corrosion transmitters within DCS systems (e.g. SmartCET)29 and by equipping skilled professionals with the latest generation of technologies.
Since 2007 the European Commission, with the Strategic Energy Technology Plan (SET-Plan), has promoted the development of new technologies that improve sustainability and efficiency while reducing costs. This is achieved by coordinating the national research of European countries and by financing projects.
With Horizon 2020, the EU provides the financial instrument to achieve these goals. Part of Horizon 2020 is the Leadership in Enabling and Industrial Technologies (LEIT) programme, which supports the development of nanotechnologies, advanced materials, manufacturing and processing, and biotechnology.
In this context, the most promising energy technologies include:
The aim of innovative materials development is to reduce resource and energy consumption. Indeed, artificial photosynthesis could be used to produce energy from the sun without intermediate energy carriers (only a small part of the roughly 120,000 TW of solar power reaching Earth is used for human activities); thermoelectric generators could be used to convert waste heat into electricity (e.g. in the USA the amount of waste heat is about 36 TWh/year).
In the following sections, the state of the art and the future trends of these technologies are described.
Artificial photosynthesis mimics natural photosynthesis, in which chlorophyll uses sunlight to break down H2O molecules into hydrogen ions, electrons and oxygen. Hydrogen and electrons convert CO2 into carbohydrates, whereas the oxygen is expelled. In artificial photosynthesis both oxygen and hydrogen can be produced. In this way, hydrogen can be used to produce energy, or to produce artificial fuels such as methanol. The main problem of the process is splitting the water molecules; the system needs catalysts such as manganese, titanium dioxide and cobalt oxide.
Scientists are studying nanomaterials and new processes to improve efficiency. Today artificial photosynthesis devices are not competitive with conventional energy equipment, and tests are performed only at laboratory scale.
In the figures below two different devices are shown:
The first system uses sunlight to consume a biofuel (ethanol or methanol) and to generate hydrogen. The anode is a glass covered by a transparent conductor (indium tin oxide or fluorinated tin oxide) coated with a thin layer of nanoparticles (tin dioxide or titanium dioxide). The electrode is immersed in an aqueous solution of NADH/NAD+. The energy absorbed generates electrons that flow to the cathode (e.g. a platinum electrode) immersed in the same solution, separated by a membrane permeable to hydrogen protons (H+). Hydrogen or, if oxygen is present, electricity is produced. In the second system, the biofuel is replaced by an oxidation catalyst (IrO2·nH2O) and the NADH solution is replaced by a ruthenium solution. The latter injects electrons into the TiO2. These electrons flow to the cathode, where hydrogen protons are reduced to hydrogen.
Piezoelectric materials are widespread in everyday life. They are used in cars (fuel injection, airbags, parking sensors), in mobile phones (camera focus), in hospitals (microsurgery), and in pressure sensors and transducers. When these materials are subjected to a mechanical stress they generate electric energy proportional to the stress; vice versa, when an electric field is applied, piezoelectric materials produce mechanical strain.
Nowadays piezoelectric materials can be divided into three groups:
Quartz has the highest quality factor (the parameter that characterizes the sharpness of the electromechanical resonance spectrum), making it suitable for low-loss transducers, whereas PZT has the highest electromechanical coupling factor (corresponding to the rate of electromechanical transduction) and piezoelectric strain constant (measuring the strain induced by an external electric field), making it suitable for high-power transducers. PVDF has a high voltage constant and mechanical flexibility, so it is suitable for pressure/sensor applications.
The most used is lead zirconate titanate (Pb(Zr,Ti)O3), and the challenge is to find new materials because this alloy contains about 60% lead by weight, a toxic material.4
In 1989, Stanley Pons and Martin Fleischmann demonstrated, in a small-scale laboratory experiment, a high release of heat, without radiation, by electrochemically charging deuterium into palladium. This is called “cold fusion”. Nowadays cold fusion is included in the class of Low Energy Nuclear Reactions (LENR), and other materials (lithium and nickel) have been found to produce the same effect.
Unlike hot fusion, LENR requires solid materials and does not need a high flux of neutrons. The heat released is a function of the deuterium concentration in the palladium (the phenomenon is observed only if D/Pd > 0.9), hence a proper metallurgy needs to be found.4 A first hot-fusion nuclear reactor is under construction (the ITER project).
The following table shows the main experiments and materials.
Electrochemical loading is mainly based on Pd/alloys with deuterons from heavy water, because that is the system used in the Fleischmann and Pons experiments; Ni/alloys with protons from hydrogen gas are preferred for gas loading.
One of the most discussed experiments is Rossi's E-Cat reactor. External heat (electric or fossil) is applied to the reaction chamber. The reactions begin when the reactor temperature reaches 60 °C and reportedly produce a large amount of heat (more than the energy input). This energy can be used to heat water and produce steam. When the reaction is stable the external heat can be turned off and the reactions continue for hours. The first plant (1 MWth) was tested in Bologna on October 28th, 2011; it ran for 5.5 hours, producing an average of 479 kW.
Small E-Cat reactors (10-20 kW) for the domestic market are being tested (Rossi's Leonardo Corporation).
A thermoelectric system uses the Seebeck effect, which allows electrical power to be generated from a temperature gradient. The system consists of couples of n-p semiconductors connected electrically in series and thermally in parallel. When a temperature gradient is applied, mobile electrons move from the hot side (n-type semiconductor) to the cold side (p-type semiconductor), where there are free holes. The net charge produces an electrostatic potential.
The efficiency is estimated by means of a dimensionless group, the figure of merit ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature:
Therefore, materials should have high Seebeck coefficient and electrical conductivity and small thermal conductivity.
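As a numerical check, plugging representative room-temperature literature values for Bi2Te3 into the figure of merit ZT = S²σT/κ gives a value close to the well-known result of order 1; the exact inputs below are order-of-magnitude assumptions, not measured data.

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
# Representative room-temperature values for Bi2Te3 (order-of-magnitude
# literature numbers, for illustration only).
S = 200e-6    # Seebeck coefficient, V/K
sigma = 1e5   # electrical conductivity, S/m
kappa = 1.5   # thermal conductivity, W/(m*K)
T = 300.0     # absolute temperature, K
ZT = S**2 * sigma * T / kappa
```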
Nowadays, materials used for this application are divided into three groups depending on the temperature:
The figure reports the history of thermoelectric materials from 1960 up to now. There are three different regions:
The most widely used of these materials is Bi2Te3, but this alloy is toxic to the environment. For this reason, alloys of Mg2Si, CoSb3, ZnSb and ZnO have been studied to find a new class of materials.4
These technologies are part of the low-carbon energy technologies and fit well within the European “2050 Energy Strategy”. This strategic plan aims to reduce greenhouse gas emissions by 80-95% compared to 1990 levels, by 2050.
Further R&D efforts are needed on new materials to allow their commercialization. Indeed, for artificial photosynthesis, innovative materials and low-cost fabrication techniques have been introduced (e.g. hydrothermal and chemical vapor deposition)7, but the experimental tests are still carried out at laboratory scale. Piezoelectric materials are widespread, but new alloys with lower lead content are necessary. LENR experiments are difficult to reproduce and control, and tests are limited to a few hours of continuous operation. Thermoelectric materials have low efficiencies; therefore new alloys are necessary to improve the figure of merit (ZT).
CO2 recycling introduces a shorter path (in terms of time) to close the carbon cycle compared to natural cycles and/or an additional way to store CO2 in materials with a long life-time; in addition, it is a way to store renewable energy sources and/or use an alternative carbon source to fossil fuels. Moreover, CO2 recycling produces valuable products that can be marketed and thus add economic incentives to the reduction of CO2 emissions, while options such as storage only add costs. Carbon capture and recycling (CCR) avoids also the costs associated with transporting CO2.
Recycling of CO2 is therefore a possible contributor, together with other technologies, to a solution for the global issue of GHG emissions, but has only started to be considered in detail in recent years.
The lifetime of the products of CO2 conversion is another important aspect (see figure 1). The IPCC report on CO2 capture and storage selected as a crucial parameter the time lapse between the moment of CO2 conversion into a product and CO2 release back into the atmosphere. A long lifetime of the CO2-based product will fix the molecule for a long time, thus preventing its (re-)release into the atmosphere. Most product lifetimes range between several months and a few years, with the exception of inorganic carbonates and polymers based on organic carbonates that store CO2 from decades to centuries.
CCR can also be viewed as a way to introduce renewable energy into the chemical and energy chain, by storing solar, geothermal, wind, or other energies in chemical form. The resulting chemical facilitates storage and transport of energy, and is particularly important if it is compatible with the existing energy infrastructure and/or can be easily integrated into the existing chemical chain. Therefore, recycling CO2 is an opportunity to limit the use and drawbacks of fossil fuels, while avoiding the high costs (including energy) associated with a change in the current energy and chemical chain. In considering CO2 recycling, the effect is thus not only direct, that is, subtraction of CO2 from emissions, but a combination of direct and indirect effects that amplifies the impact. Finally, CO2 finds utilization when there is a profitable cost/benefit trade-off linked to (re)using CO2 in place of the existing technology, regardless of any considerations linked to capture and storage policies. In the following, the emerging large-scale CO2 conversion routes will be shortly analysed.
Notes: Necessary timeframe for development: 1 More than 10 years → 4 Industrial; Economic Perspectives: 1 Difficult to estimate→4 Available industrial data; External use of energy: 1 Difficult to decrease→4 No need; Volume CO2 (potential): 1 Less than 10 Mt→4 More than 500 Mt; Time of sequestration: 1 Very short→4 Long term; Undesirable impacts on environment (utilization of solvents, utilization or production of toxic or metallic compounds, utilization of scarce resources): 1 Significant→4 Low.
CO2 recycling by non-biological routes can be divided into three different sub-routes: inorganic reactions, organic reactions, and syngas production with further conversions.
Mineral carbonation, that is, the formation of carbonate from naturally occurring minerals such as silicate-rich olivine and serpentine, is an already well-recognized carbon storage option.
Calcium carbonate is a key product, for example of the Solvay process for the production of Na2CO3 and NaHCO3, and can be mined as limestone. An extensive market also exists for synthetic or precipitated calcium carbonate for applications in the paper industry, plastics, rubber and paint products, with an estimated global market of more than 15 Mt a−1.
One of the most promising processes for converting CO2 from flue gases into bicarbonate is Skyonic's patented CO2 mineralization process SkyMine, the first for-profit system converting flue-gas CO2 into bicarbonate (baking soda) as its main commercial product. 25 million US$ was provided by the US Department of Energy (DoE) in 2010 to support the industrialization of this carbon capture technology, which can be retrofitted to existing plant infrastructure. Another project (the Calera project) was also selected in the same 2010 funding act (DoE share 20 million US$), and focuses on the production of mineral end-products as building materials, such as carbonate-containing aggregates or supplementary cement-like materials. Inspired by the biogenesis of coral reefs, the heart of the technology broadly consists of precipitating captured CO2 as novel (meta)stable carbonates and bicarbonates with magnesium- and calcium-rich brines; the CO2 would originate from captured flue gas (from fuel combustion or other large plants) and the brine from seawater or alkaline industrial waste sources.
The synthetic routes from CO2 to organic compounds containing three or more carbon atoms number in the tens, as extensively reviewed, but only five are earmarked as industrialized. Figure 2 gives an overview of some of the possible organic chemicals produced from CO2. Among these, the most important are urea, acrylates, lactones, carboxylic acids, isocyanates, polycarbonate via monomeric cyclic carbonate, alternating polyolefin carbonate polymers, polyhydroxyalkanoates, polyether carbonate polyols and chlorinated polypropylene.
The chemical reduction of thermodynamically stable CO2 to low-molecular-weight organic chemicals requires high-chemical-potential reducing agents such as H2, CH4, electrons, and others. The hydrogenation of CO2 can be connected to the well-established portfolio of chemicals synthesized from syngas (CO/H2) via the reverse water–gas shift (RWGS) reaction, where methanol, formic acid, and hydrocarbons emerge as the three main products of interest (see figure 3).
Methanol is one of the chemicals with the largest potential to convert very large volumes of CO2 into a valuable feedstock. It is already a commodity chemical, manufactured on a large scale (40 Mt in 2007) mainly as a feedstock for the chemical industry towards chemicals such as formaldehyde, methyl tert-butyl ether (MTBE), and acetic acid, which makes CH3OH a preferable alternative to the Fischer–Tropsch (FT) reaction, due to the broader range of chemicals/products, and hence their application fields as well as higher productivity.
An alternative source of reducing hydrogen can be methane. The complete hydrogenation of CO2 to methane is the Sabatier reaction: CO2 + 4H2 → CH4 + 2H2O.
In terms of hydrogen consumption, and hence overall energetics, CO2 reduction to methanol rather than to methane might appear favorable given the better ratio by energy value of the product relative to the starting H2; nevertheless, specific conditions (for example, the need to produce substituted natural gas; SNG), know-how, and other local conditions have spurred industrial applications of the Sabatier reaction.
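The hydrogen-demand comparison can be made concrete from the stoichiometry (CO2 + 3H2 → CH3OH + H2O for methanol versus the Sabatier reaction CO2 + 4H2 → CH4 + 2H2O):

```python
# Hydrogen demand per ton of CO2 for the two reduction routes discussed:
#   methanol:           CO2 + 3 H2 -> CH3OH + H2O
#   methane (Sabatier): CO2 + 4 H2 -> CH4 + 2 H2O
M_CO2, M_H2 = 44.01, 2.016   # molar masses, g/mol

h2_per_t_co2_methanol = 3 * M_H2 / M_CO2   # ~0.137 t H2 per t CO2
h2_per_t_co2_methane = 4 * M_H2 / M_CO2    # ~0.183 t H2 per t CO2
```

Methanol thus requires about 25% less hydrogen per ton of CO2 converted, which is the energetic argument made above.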
In terms of CO2 consumption, a total of 1.8 tons of CO2 is needed to produce 1 ton of algal biomass. Microalgae also need nitrogen and phosphorus nutrients. The integration of chemicals and energy production in large-scale industrial algal biofarms has led to the “algal biorefinery” concept. The chemical products of the biorefinery include carbohydrate and protein extracts, fine organic chemicals (e.g., carotenoids, chlorophyll, fatty acids) for food supplements and nutrients, pharmaceuticals, pigments, cosmetics, and others, along with energy fuels, for example, biodiesel, bioethanol, and biomethane. The biochemical conversions aimed exclusively at energy production (e.g., anaerobic digestion, alcoholic fermentation, photobiological hydrogen production) have recently been reviewed by Brennan and Owende.
Thus, even if current stage of development in algal carbon capture at large emitter sites indicates an economic cost that is still too high, there are signals of a fast scientific and technological development in this area, including improvements in:
Another interesting technology is power-to-gas, which is being explored mainly with a focus on storing renewable energy; project developers so far tend to use CO2 from biogas as the carbon source for methanation, and hydrogen may also be directly mixed with biogas (see figure 4). Although these plants might provide very useful insights into the options for CO2 capture, methanation, and hydrogen storage, biogas as a carbon source may prove sustainable only if derived from (wet) waste and sewage.
In the same field, INPEX is active with interesting research that involves injecting CO2 into the ground, using CCS or CO2 Enhanced Oil Recovery (EOR), with the aim of producing methane via microbes that live in oil and gas fields and water-bearing strata (see figure 5). A constant supply of hydrogen is vital to the microbes' survival. INPEX has performed indoor experiments using electrochemically generated hydrogen. The research has confirmed electrochemically activated methane production by microbes, including microbes that live in an oil field in Japan.
To conclude, optimistically, assuming that all the options for CO2 utilization can be fully implemented and considering that the use of CO2 as carbon source partly prevents the use of fossil fuels and incorporates renewable energy into the chemicals and energy chain (and thus has a more widespread impact than only on GHG emissions), a potential reduction equivalent of 250–350 Mt a−1 can be estimated in the short- to medium-term. This amount represents about 10 % of the total reduction required globally, that is, it is comparable to the expected impact of carbon capture and storage technologies, but with additional benefit in terms of (i) fossil fuel savings; (ii) additional energy savings; (iii) accelerating the introduction of renewable energy into the chemicals and energy chain.
Electrocoagulation (EC) combines conventional treatments such as coagulation and flotation with electrochemistry. The process destabilizes soluble organic pollutants and emulsified oils in aqueous media by introducing highly charged species that neutralize the electrostatic charges on particles and oil-emulsion droplets, facilitating agglomeration/coagulation (and the subsequent separation from the aqueous phase). In comparison with conventional coagulation processes, the smallest charged particles have a greater probability of being coagulated because the electric field sets them in motion. Moreover, an electrocoagulated floc tends to contain less bound water, is more shear resistant, and is more readily filterable.
EC has been known since 1909 (aluminium/iron-based electrocoagulation patent by A.E. Dietrich). It has most commonly been used in the oil & gas, construction, and mining industries to separate emulsified oil, petroleum hydrocarbons, suspended solids, and heavy metals from effluents. In the Oil & Gas sector in particular, EC is fundamental to treat and reuse (on-site) the water needed for drilling and fracking processes, minimizing the impact of injection wells. The application market has not yet exploded, due to high costs, but changes in regulations and growth in the cited industrial sectors have recently brought electrocoagulation to the forefront.
The amount of metal dissolved at the anode is governed by Faraday's law, m = I·t·M/(z·F), where z is the valence of the ions of the electrode material, M is the molar mass of the metal, and F is Faraday's constant (96485 C/mol). Coagulation is brought about by the reduction of the net surface charge; the colloidal particles (previously stabilized by electrostatic repulsion) can approach closely enough for van der Waals forces to cause aggregation. The reduction of the surface charge is a consequence of the decrease of the repulsive potential of the electrical double layer in the presence of an electrolyte of opposite charge (Fig. 1).
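The Faraday's-law dissolution relation m = ItM/(zF) described above can be turned into a quick coagulant-dosing estimate. The current, time and aluminium electrode in this sketch are illustrative assumptions, not values from the text.

```python
# Faraday's law for anodic metal dissolution in electrocoagulation:
#   m = I * t * M / (z * F)
F = 96485.0  # Faraday constant, C/mol

def dissolved_mass(I_amp, t_s, M_g_mol, z):
    """Mass of metal (g) dissolved at the anode."""
    return I_amp * t_s * M_g_mol / (z * F)

# Hypothetical example: 10 A for 1 h on an aluminium anode
# (M = 26.98 g/mol, z = 3 for Al3+).
m_al = dissolved_mass(10.0, 3600.0, 26.98, 3)
print(f"Al dissolved: {m_al:.2f} g")  # ~3.36 g
```

A few grams of coagulant per ampere-hour is the order of magnitude that sizing calculations for EC cells start from.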
The classical representation of EC dissolution with the induced separation mechanisms (coagulation, flocculation and flotation) is reported in Fig. 2. The following main reactions take place during EC.
The metals and other contaminants, suspended solids and emulsified oils are entrained within the floc because of the neutralization of surface charges (destabilization). Destabilization also occurs by “sweep flocculation”, where impurities are trapped and removed in the amorphous hydroxide precipitate produced. Microbubbles (mainly of H2 and O2) adhere to agglomerates helping to separate and lift the flocs up to the surface. Depending on the application, the final solids separation step can be done using settling tanks, media filtration, ultrafiltration, and other methods.
Ferrous iron may be oxidized to Fe3+ by oxygen or by anodic oxidation, and the formation of active chlorine species can enhance EC performance. Both Fe and Al ions form complexes with OH− ions. The formation of these complexes depends strongly on the pH of the solution, as shown in Fig. 3: above pH 9, Al(OH)4− and Fe(OH)4− are the dominant species. Anions such as sulphate or fluoride affect the composition of the hydroxides because they can participate in side reactions and replace hydroxide ions in the precipitates. Temperature affects floc formation, reaction rates and conductivity. The pollutant concentration affects the removal efficiency because coagulation follows pseudo-second- or first-order kinetics. Ezechi et al., for instance, showed second-order kinetics for boron adsorption onto Fe(OH)3 in EC, reporting a removal efficiency of almost 97% using iron plate electrodes (inter-electrode distance of 0.5 cm, 15 mg/l boron in produced water, pH 7.84, current density of 12.5 mA/cm2).
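As a sketch of how pseudo-second-order kinetics translate into a removal curve, the integrated form 1/C = 1/C0 + kt can be evaluated over time. Only the 15 mg/l initial boron concentration comes from the study above; the rate constant here is a hypothetical value chosen for illustration.

```python
# Integrated pseudo-second-order decay: 1/C = 1/C0 + k*t
def conc(C0, k, t):
    """Concentration (mg/L) after time t (min) for second-order kinetics."""
    return 1.0 / (1.0 / C0 + k * t)

C0 = 15.0  # mg/L boron in produced water (from the text)
k = 0.05   # L/(mg*min), hypothetical rate constant

for t in (0, 10, 30, 60):
    C = conc(C0, k, t)
    removal = 100.0 * (1.0 - C / C0)
    print(f"t = {t:3d} min  C = {C:5.2f} mg/L  removal = {removal:5.1f} %")
```

With this assumed k, the removal after an hour lands in the high-90s percent, the same order as the efficiency reported above; fitting k to real EC data is of course what the cited study actually does.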
This application does not work properly in the case of low conductivity (i.e. less than 300 μS/cm), low suspended solids (turbidity less than 25 NTU or TSS less than 20 mg/L), nonpolar and monovalent contaminants (aqueous salts of Na, K, Cl, F, etc.), or nonpolar and uncharged particles.
The literature reports many applications of EC to water treatment & reuse. Among them, the treatment of oily wastewater and produced water is relevant for the Oil & Gas sector. Produced water (PW) is the water trapped in the reservoir rock under high pressures and temperatures and brought up along with oil or gas during production. Its other components are salts related to the source (seawater and groundwater) as well as dispersed hydrocarbons, dissolved hydrocarbons, dissolved gases (such as H2S and CO2), bacteria and other organisms, and dispersed solid particles. PW may also include chemical additives (corrosion inhibitors, oxygen scavengers, scale inhibitors, emulsion breakers and clarifiers, flocculants and solvents) used in pre-treating, drilling and production operations in general, as well as in the downstream oil/water separation process. These chemicals affect the oil/water partition coefficient, toxicity, bioavailability, and biodegradability.
PW is considered an industrial waste, and its disposal to surface waters or its evaporation in ponds is subject to stringent environmental regulations. It should be treated and reinjected for pressure maintenance, replacing aquifer water, or reused for irrigation or as industrial process water. Many companies propose their own EC systems (Watertectonics, F&T Water Solutions, Bosque Systems, etc.) for the treatment of PW. The conference proceedings of IDA give some interesting examples of EC pretreatment for water reuse in the Oil and Gas industry. A typical process scheme, taken from a pilot plant presented at the conference, is reported in Fig. 4.
Eames reports the case study of an oil field in Colombia's Meta Province equipped with an EC/DAF/UF/RO train for wastewater reuse (3,000 BPD of water for agricultural and surface irrigation, <60 ppm sodium), with the characteristics in the table below. Piemonte et al. also proposed a process analysis, with energy and material balances, of a produced water treatment train including a Vibratory Shear Enhanced Processing (VSEP) membrane system (secondary treatment) and RO as the tertiary treatment to achieve the quality needed for water reuse.
Worldwide economic growth continues to drive demand for transportation fuels.
Several processes are presently able to meet individual refinery needs and project objectives. UOP LLC, in particular, is one of the most active companies in this field. The basic flow schemes considered by UOP are single-stage and two-stage designs. The UOP two-stage Unicracking process can use a separate-hydrotreat or a two-stage flow scheme, as shown in Figure 1. In the separate-hydrotreat flow scheme the first stage provides only hydrotreating, while in the two-stage process the first stage provides hydrotreating and partial conversion of the feed. The second stage provides the remaining conversion of the recycled oil so that a high overall conversion is achieved. These flow schemes offer several advantages in processing heavier and highly contaminated feeds. Two-stage flow schemes are economical when the throughput of the unit is relatively high.
The design of hydrocracking catalyst changes depending upon the type of flow scheme employed. The hydrocracking catalyst needs to function within the reaction environment and severity created by the flow scheme that is chosen.
During the early years of hydrocracking, refiners were mainly interested in maximizing production of naphtha for reforming to high-octane gasoline. However, with advancements in hydrocracking catalyst technology and the demand for maximizing distillate yields from heavier feedstocks, the two-stage design offers a cost-effective option for a larger-capacity, maximum-distillate unit operation.
A major difference between the first- and second-stage hydrocracking reaction environments lies in the very low concentrations of ammonia and hydrogen sulfide in the second stage (see figure 2). The first-stage reaction environment is rich in both ammonia and hydrogen sulfide, generated by hydrodenitrogenation and hydrodesulfurization of the feed. This significantly impacts reaction rates, particularly cracking rates, leading to different product selectivity and catalyst activity between the two stages. The catalyst system can be optimized to obtain a highly distillate-selective overall yield structure, and the optimum severity can be set for each stage to achieve the catalyst life target with minimum catalyst volume. Overall, the two-stage design allows optimization of the conversion severity between the two stages, maximizing overall distillate selectivity. New advances in the two-stage Unicracking process design include several innovations in each reaction section. The pretreating section uses a high-activity pretreating catalyst that allows hydrotreating at higher severity, providing good-quality feed for the first-stage hydrocracking section and enabling maximum first-stage selectivity to high-quality distillate. The second stage is optimized by use of a second-stage hydrocracking catalyst specifically designed to take advantage of the cleaner reaction environment, with balanced cracking and metal functions. At the same time, the second-stage hydrocracking severity is optimized so that maximum distillate selectivity is obtained from the second stage of hydrocracking.
Designing catalysts which can be successfully used for processing heavy feeds requires an understanding of the interactions of many factors. Detailed knowledge is increasingly important for controlling reaction pathways to achieve specific product types to meet today’s market demands. The key considerations for optimal catalyst design require good understanding of the molecular transformations of feed to product with respect to catalyst functions and process variables.
Such considerations involve process severity and its impact on the extent of secondary cracking in the hydrocracking reactor. The mechanism of paraffin hydrocracking consists of a sequence of steps beginning with dehydrogenation at metal sites to form olefinic intermediates, which are then protonated at the acid sites to form reactive carbenium ions. These, in turn, can isomerize and leave the catalyst surface without cracking after picking up a hydride ion at the metal sites. Alternatively, they can crack to form smaller alkanes, which then leave the catalyst surface as hydrocracked products. This process of isomerization and cracking to primary products is referred to as "ideal cracking" because it does not involve secondary cracking of the initially formed products. Secondary cracking often results in the formation of light ends, which are of low value to a unit operating to make liquid transportation fuels.
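The trade-off between ideal and secondary cracking can be caricatured as a first-order series reaction A → B → C, where A is the feed paraffin, B the primary distillate-range product, and C the low-value light ends. The rate constants below are purely illustrative, not measured values; the point is that the primary-product yield passes through a maximum at a finite residence time, which is why severity must be controlled.

```python
import math

# Toy series kinetics A -> B -> C with first-order steps.
k1, k2 = 1.0, 0.3  # 1/h, hypothetical primary- and secondary-cracking rates

def yields(t):
    """Mole fractions (A, B, C) at time t for unit initial A."""
    A = math.exp(-k1 * t)
    B = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    C = 1.0 - A - B
    return A, B, C

# Residence time maximizing the primary product B (classical result):
t_star = math.log(k2 / k1) / (k2 - k1)
A, B, C = yields(t_star)
print(f"optimum t = {t_star:.2f} h -> primary B = {B:.2f}, light ends C = {C:.2f}")
```

Running past t* only converts distillate into light ends, mirroring the over-cracking penalty described above.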
Control of this sequence of steps, stopping the reactions after formation of the primary products, is accomplished by careful selection of catalyst properties such as the strength and distribution of acid sites, and by tailoring the hydrogenation function to fit the acidity of the catalyst. In addition, particularly when heavy feedstocks are being processed, diffusion constraints that contribute to secondary cracking are eliminated by strict control of the pore size and pore geometry of the catalyst to match the molecular dimensions of a given feed. These catalyst properties must also be matched to the service environment in which the catalyst is intended to function, including the recycle gas composition and the reactor pressure. Thus, detailed knowledge of the molecular types and sizes in the feed is incorporated into the catalyst selection criteria in order to determine the appropriate catalytic components for the feed of a given unit.
Hydrocracking catalysts are typically dual-function catalysts, containing an acid function for cracking and a metal function for hydrogenation. As shown in Figure 3, a good hydrocracking catalyst, amorphous or zeolitic, is designed to balance these two functions for optimum performance. In the figure, two arrows indicate the two functions (acid and metal), and the height of each arrow indicates the strength of the function. A catalyst with a proper balance of these two functions performs optimally in terms of desired product selectivity and catalyst activity/stability. However, if a catalyst designed for the sour reaction environment typical of first-stage operation is put in the cleaner reaction environment of the second stage, a significant boost in the cracking function is observed, while the performance of the metal function remains basically unchanged. Thus, a catalyst that was in good balance for the first-stage environment becomes unbalanced for the second-stage environment, resulting in sub-optimal performance. This difference is exacerbated as the temperature required to achieve the desired conversion is reduced; the reduction in temperature also weakens the metal function, reducing hydrogenation. Therefore, for an ideal second-stage catalyst, it is desirable that the acidity of the cracking material be weak and the metal function stronger, so that even though the catalyst may appear imbalanced for the first-stage sour environment, it will be in balance in the second-stage reaction environment. Applying this design approach, UOP recently developed a new second-stage catalyst achieving higher distillate selectivity than the current UOP standard design.
Enhanced two-stage performance is achieved by optimized first- and second-stage conversion severity and application of the new second-stage catalyst. This results in significantly improved overall C5+ yields and a product slate which is more selective to a high quality heavy diesel product.
The enhanced two-stage design has improved distillate selectivity and the product slate is diesel selective with lower light-end production resulting in 7-10% lower hydrogen consumption. The product qualities are similar or better. The improved performance is achieved by optimum processing severity and use of new second-stage hydrocracking catalyst.
Growing concerns about climate change, as well as the management of ever-increasing liquid and solid wastes, have strongly pushed R&D in waste-to-fuel conversion. The transformation of wastes into fuels can be realized through the different processes represented in Fig. 1 (extending the classification of 2nd-generation biofuels). Direct incineration of waste enables the highest recovery of the energy content from the thermodynamic point of view. On the other hand, depending on the waste composition, the emissions of the combustion process can be characterized by the presence of pollutants such as HCl, HF, NOx, SO2, VOCs, PCDD/F, PCBs and heavy metals.
Besides incineration, other thermochemical processes (see Fig. 1), such as pyrolysis, gasification and plasma-based technologies, have been developed for selected waste streams. In general, thermal treatments of biomass (and wastes) yield a wide spectrum of fuels (gaseous, liquid and solid) and many chemicals as co-products; the specific treatment is chosen according to the final fuel and chemical products. Many companies are using municipal solid waste (MSW) thermochemical conversion methods: Hitachi Metals Environmental Systems, Ebara/Alstom, Enerkem, Foster Wheeler, Nippon Steel, PKA, SVZ, etc. The first industrial-scale MSW-to-biofuel facility, opened in Edmonton in 2014 by Enerkem, converts 100,000 t/year of municipal waste into chemicals and biofuels and is able to divert 90% of the residential waste from landfills.
The multiple synthetic conversion routes of the major biofuels produced (Biofuel Flow) from first- and second-generation biomass feedstocks are represented in Fig. 2. Conversion through biochemical and physicochemical processes plays an important role in recent biorefineries. These, following the paradigm of zero waste and zero emissions, allow the extraction of valuable substances by processing biomass into a spectrum of marketable products and energy, and are expected to play a fundamental role in the future low-carbon economy. Moreover, biorefineries would be very attractive from an employment-creation perspective, resulting in significantly more jobs per unit of biomass feedstock than conventional processes. A brief review of the processes and technologies cited in Fig. 1 is given in the following.
Pyrolysis occurs without oxygen, at atmospheric pressure, in a temperature range of 250-900°C. Generally, long vapour residence times favour char production (at lower process temperatures) and gas yield (at higher temperatures), whereas moderate and short vapour residence times favour liquid production. In fast pyrolysis, the heating occurs at a moderate temperature (400-550 °C) with very high heating rates (on the order of 100°C/s). A subsequent rapid quenching is required to condense the vapours, minimizing secondary reactions and coalescence or agglomeration (aerosol formation). The heat duty can be recovered from the combustion of part of the produced syngas. The liquefaction of solid wastes by pyrolysis has been widely reviewed in recent years, due to the increasing interest in integrated technologies to derive fuels and chemicals from solid wastes. A review of the process conditions for optimum bio-oil yield in hydrothermal liquefaction of biomass is given by Akhtar and Amin.
Municipal plastic wastes, through cracking and pyrolysis, can produce bio-oil of good quality, a valid alternative to plastic recycling or direct combustion. For example, Sharma et al. (2014) report a study of the pyrolysis of high-density polyethylene grocery bags to produce alternative diesel fuels, or blend components for petroleum diesel (saturated aliphatic paraffins), of very good quality (in terms of cetane number and lubricity). Many examples of pyrolysis plants are located in Japan. Mogami Kiko owns a pyrolysis plant (capacity of 200 kg/h) that processes several kinds of plastic in a rotary kiln, producing 80-100 Nm3/h of gas with an LHV of 5000-6000 kcal/Nm3, 30-40 kg/h of tar and 20-30 kg/h of char. Environment System has implemented the pyrolysis of (chlorine-free) thermoplastic waste in a tank reactor with continuous feeding of scrap film (extruder). Toshiba implemented the continuous feeding of thermoplastic waste (no chlorine, 40 tons/day) into an externally heated rotary kiln, producing liquid and gaseous hydrocarbons and 4 MW of cogeneration. Samshiro et al. described fuel oil production from MSW in sequential pyrolysis and catalytic reforming reactors. Wong et al. report alternative solutions for solid waste pyrolysis, such as fluidized beds and supercritical water. Although microwave-assisted pyrolysis is another possible solution, especially in the treatment of commingled plastic waste, this relatively new concept requires more feasibility studies.
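For scale, the Mogami Kiko gas figures quoted above (80-100 Nm3/h at an LHV of 5000-6000 kcal/Nm3) can be converted into a thermal power. This is pure unit conversion on the published range, nothing more.

```python
# Thermal power carried by the pyrolysis gas, from flow and LHV.
KCAL_TO_MJ = 4.184e-3  # 1 kcal = 4.184 kJ

def thermal_power_kw(flow_nm3_h, lhv_kcal_nm3):
    """Thermal power (kW) of a gas stream given flow (Nm3/h) and LHV (kcal/Nm3)."""
    mj_per_h = flow_nm3_h * lhv_kcal_nm3 * KCAL_TO_MJ
    return mj_per_h * 1000.0 / 3600.0  # MJ/h -> kW

lo = thermal_power_kw(80, 5000)   # low end of the published range
hi = thermal_power_kw(100, 6000)  # high end
print(f"gas thermal power: {lo:.0f}-{hi:.0f} kW")  # ~465-697 kW
```

Roughly half a megawatt thermal from a 200 kg/h kiln, before the tar and char streams are counted.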
Gasification, operating at high temperatures (>700 °C) without combustion, results in solid and gaseous products. Although associated with lower power production and higher complexity, the gasification of solid wastes can count on about a hundred operating plants with capacities in the range 10–250·10³ t/y, and represents a valid alternative in the field of waste management. Moreover, gasification-based technologies reduce the amount of waste sent to disposal in comparison with conventional combustion-based WtE units, and allow alternative strategies for syngas utilization. Gasification of waste has therefore been exploited as an alternative to combustion in waste-to-energy (WtE) processes, in order to improve performance and support a distributed WtE policy.
The use of multiple high-temperature processes, including the breakdown of organics through plasma arcs, enables the production of a mixture of hydrogen and carbon monoxide. In this way, metals and other inorganic materials in garbage can be isolated and recycled; the combination of high temperatures and an oxygen-poor environment prevents the production of dioxins and furans; and the syngas can either be burned directly in gas turbines to produce electricity, or converted into other fuels, including gasoline and ethanol. Enea reported several experimental campaigns conducted on lab- and pilot-scale devices. Molino et al. investigated the steam gasification of scrap tires as a sustainable and cost-effective alternative to tire landfill disposal; steam activation of the char derived from the tire residues of the gasification process was carried out at constant temperature and constant ratio between gasifying agent and char, using different activation times (180 and 300 min).
These methods are based on the separation of useful chemical compounds by physicochemical extraction, such as cold-press extraction, supercritical fluid extraction, and microwave extraction. In recent years, cavitation-assisted (e.g. ultrasound-assisted) extraction has been utilized for biomass pretreatment, delignification and hydrolysis, oil extraction, fermentation and bioalcohol synthesis. Transesterification of plant or algal oil is a standardized process by which triglycerides are reacted with methanol in the presence of a catalyst to deliver fatty acid methyl esters (FAME) and glycerol. The extracted vegetable oils or animal fats are esters of saturated and unsaturated monocarboxylic acids with the trihydric alcohol glycerol (triglycerides), which can react with an alcohol in the presence of a catalyst, a process known as transesterification (according to the following simplified scheme of reactions).
The simplified process scheme is given in Fig. 3. From an economic point of view, biodiesel production has proven to be very feedstock-sensitive. Leung et al. report a review of biodiesel production using catalysed transesterification. Waste vegetable oil (WVO) can also be converted after refinement; it has a low sulphur content and is not associated with changes in land use. The utilization of waste cooking oils is explained in detail in the review of Kulkarni et al.
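A quick stoichiometric estimate of the methanol demand for transesterification (triglyceride + 3 CH3OH → 3 FAME + glycerol) can be made by assuming triolein as a model triglyceride; that choice, and the 6:1 excess ratio often quoted for base-catalysed processes, are illustrative assumptions, since real feedstocks are mixtures.

```python
# Methanol demand per kg of oil for transesterification,
# assuming triolein (M ~ 885.4 g/mol) as the model triglyceride.
M_TG = 885.4    # g/mol, triolein (assumption)
M_MEOH = 32.04  # g/mol, methanol

def methanol_per_kg_oil(molar_ratio=3.0):
    """Grams of methanol per kg of oil at the given methanol:TG molar ratio."""
    return molar_ratio * M_MEOH / M_TG * 1000.0

stoich = methanol_per_kg_oil()      # 3:1, the reaction stoichiometry
excess = methanol_per_kg_oil(6.0)   # 6:1 excess to push the equilibrium
print(f"stoichiometric: {stoich:.0f} g/kg oil")  # ~109 g
print(f"6:1 excess:     {excess:.0f} g/kg oil")  # ~217 g
```

The low methanol requirement per kg of oil is one reason the economics end up dominated by the feedstock cost, as noted above.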
In general, the conversion of biodegradable waste or energy crops through anaerobic digestion produces a gaseous fuel called biogas (mainly methane and carbon dioxide). Similarly, the wastes in landfills generate gases (landfill gases, LFG) that can represent a source of renewable energy. Some examples of commercial conversion processes (typically run via anaerobic digestion or fermentation by anaerobes) are reported in the table below (extracted from).
Microbial hydrogen production using anaerobic fermentative bacteria is considered a cost-effective technology because the process can use waste materials or wastewaters. The biological production pathways of hydrogen and methane (by microorganisms) can be divided into two main categories: by photosynthetic bacteria under anaerobic or semi-anaerobic light conditions, and by chemotrophic anaerobic bacteria. During the process, organic matter is converted to volatile fatty acids through hydrolysis and acidogenesis (acidogenic or dark fermentation). The latter produces fuel gas at higher rates. Hydrogen yields from various crop substrates are reported by Mei Guo et al. Kurniawan et al. reported a study on acid fermentation combined with post-denitrification for the treatment of primary sludge.
Since 1980, the US Department of Energy has supported the Aquatic Species Program (ASP) to exploit algae as fuels (mainly oil from microalgae). The ASP first worked on growing algae in open ponds and on studying the impacts of different nutrient and CO2 concentrations. The program ended in 1995 due to financial issues. In recent years, energy security risks and advancements in biotechnology (the ability to genetically engineer algae to produce more oil and convert solar energy more efficiently) have revived R&D in this field. Despite the issue of low oil productivity per acre, the cultivation of oleaginous microorganisms (microalgae) can contribute to biofuel production and to the mitigation of carbon emissions. In this field, further improvements are also needed in the downstream processes and the light supply systems.
Waste heat recovery is a process that involves capturing the heat exhausted by an existing industrial process for other heating applications, including power generation. Technavio forecast the global waste heat recovery market in the oil and gas industry to grow at a CAGR of 7.6% during the period 2014-2019. The sources of waste heat mainly include the discharge of hot combustion and process gases into the atmosphere (e.g. melting furnaces, cement kilns, incinerators), cooling water, and conductive, convective, and radiative losses from equipment and heated products. To design a waste heat reclamation unit, it is necessary to characterize the stream in terms of availability, temperature, pressure and the presence of contaminants such as particulates and corrosive gases. There are two main goals in recovering waste heat from industry: thermal energy recovery (both inside and outside the plant) and electrical power generation. Fath & Hashem compared these two solutions for the recovery of waste heat in an oil refinery plant located at Baghdad, Iraq. For overall energy-system efficiency, it is nowadays fundamental to improve the utilization of low-temperature heat streams, primarily for thermal applications like heating, ventilation, cooling, greenhouses, etc. Oda & Hashem investigated in 1990 the selection of different strategies (air conditioning, food industry and agricultural uses) for an industrial area including a refinery. Nonetheless, also for low-temperature sources, some innovations have been proposed to produce electricity for standalone plants and/or to exploit resources that cannot be properly used for direct thermal applications. In the following, all these aspects are addressed and some of the most recent and interesting R&D developments are reported.
The main utilizations in industrial systems are the preheating of combustion air or load, and steam generation. Transfer to liquid or gaseous process streams is also common in petroleum refineries, where operations (distillation, thermal cracking, etc.) require large amounts of energy that can be recovered from exothermic reactions or hot process streams in integrated systems.
Doheim et al. described the integration of rotating regenerative heat exchangers in four refining processes (two crude distillation units, a vacuum distillation unit, and a platforming unit) in order to reduce the current losses (25 to 62% of total heat input) to values of 9.9 to 37.3%. At low temperatures (<200 °C), the best uses are regenerative (recuperative) heating of feedstocks (process-internal reuse), district heating and LP steam generation. District heating (or tele-heating) is a system for distributing heat generated in a centralized location for residential and commercial requirements via a network of insulated pipes (mainly carrying pressurized hot water or steam). Alternatively, low-temperature waste heat can be used for the production of biofuel, space heating, greenhouses and eco-industrial parks. In industrial complexes requiring large amounts of freshwater and located near the sea, a viable alternative is to desalinate seawater via thermal processes such as Multiple Effect Distillation and Multi Stage Flash Desalination, in order to obtain demineralized, potable or process water.
The generation of electricity from thermal energy should be considered when there are no viable options for in-house utilization of additional process heat or for meeting neighbouring plants' demand. The most common system involves steam generation in a waste heat boiler linked to a steam turbine in a Rankine Cycle (RC). Industrial examples can easily be found in the literature. The Steam Energy WHP plant at a petroleum coke facility located at Port Arthur (Texas) recovers energy from three petroleum-coke calcining kilns at temperatures higher than 500°C, producing LP steam (used at an adjacent refinery) and 5 MW of power (saving an estimated 159,000 tons per year of CO2 emissions).
Since the thermal efficiency of conventional steam power generation becomes considerably low and uneconomical when the steam temperature drops below 370 °C, the Organic Rankine Cycle (ORC) utilizes a suitable organic fluid, characterized by a higher molecular mass, a lower heat of vaporization and a lower critical temperature than water (silicone oil, propane, haloalkanes, isopentane, isobutane, p-xylene, toluene, etc.).
These fluids enable the utilization of lower temperatures (compared to the RC) and a better coupling (lower entropy generation) with the heat source fluid to be cooled. The higher molecular mass enables compact designs, higher mass flows and higher turbine efficiencies (as high as 80-85%). However, since the cycle works at lower temperatures, the overall efficiency is only around 10-20%; as mentioned above, low-temperature cycles are inherently less efficient than high-temperature cycles. Jung et al. (2014) reported a techno-economic evaluation of an ORC (with pure refrigerants and mixtures of R123, R134a, R245fa, isobutane, butane, pentane) to recover heat from liquid kerosene to be cooled down to control the vacuum distillation temperature. An example of a recent successful ORC installation is at a cement plant in Bavaria (Germany), recovering waste heat from its clinker cooler (exhaust gas at 500°C), providing 12% of the plant's electricity requirements and reducing CO2 emissions by approximately 7000 tons/year. Several R&D projects and commercial plants are reported in the references (footnotes). An example of the T-s diagram of an ORC with cyclopentane (MW 70, boiling point 49.5°C) developed by GE is shown in Figure 3. ElectraTherm also applies a proprietary ORC to generate power from low-temperature heat by utilizing, as fuel in industrial boilers, the natural gas that would otherwise be flared.
The Kalina cycle (KC) utilizes a mixture of ammonia and water as the working fluid (with a variable temperature during evaporation). It was invented in the 1980s, and the first power plant (6.5 MW, 115 bara, 515 °C) was constructed in California (1992), followed by many plants in Japan, Pakistan and Dubai. The KC allows a better thermal match with the waste heat source and with the cooling medium in the condenser, achieving higher energy efficiency. Although Kalina systems have the highest theoretical efficiencies, their complexity still makes them generally suitable only for large power systems of several megawatts or greater.
In addition to these cycles, some advanced technologies in the research and development stage can generate electricity directly from heat. These include Stirling engines and thermoelectric, piezoelectric, thermionic, and thermo-photovoltaic (thermo-PV) devices. Although they could in the future provide additional options for carbon-free power generation, they currently show very low efficiencies. Keeping in mind that a Carnot engine operating with a heat source at 150 °C and rejecting heat at 25 °C is only about 30% efficient, all these systems show overall efficiencies in the range 1-10%. As an example, in piezoelectric power generation (PEPG), a thin-film membrane creates electricity from the mechanical vibrations of a gas expansion/compression cycle fed by waste heat (150-200°C). Thermoelectric generation (TEG) instead exploits the voltage induced by a temperature change across a semiconductor (a phenomenon known as the Seebeck effect). Öström and Karthäuser recently claimed a method for the conversion of low-temperature heat to electricity and cooling, comprising CO2 absorption and an expansion machine.
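The "about 30%" Carnot figure quoted above is easy to verify from the two temperatures alone:

```python
# Carnot efficiency for a heat engine between a hot source and a cold sink.
def carnot_efficiency(t_hot_c, t_cold_c):
    """Ideal (Carnot) efficiency from Celsius source/sink temperatures."""
    Th, Tc = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - Tc / Th

eta = carnot_efficiency(150.0, 25.0)
print(f"Carnot efficiency: {eta:.1%}")  # ~29.5%, i.e. "about 30%"
```

Real low-temperature devices reach only a fraction of this ideal limit, which is why the 1-10% range above is not surprising.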
Finally, recent R&D efforts on the use of saline solutions at different concentrations have enabled heat conversion into electricity in the lowest temperature range of application. This is possible by making use of a heat engine based on Salinity Gradient Energy (SGE) (or Salinity Gradient Power, SGP) technologies.
Salinity Gradient Energy is a novel non-conventional renewable energy source related to the mixing of solutions with different salinity levels, as occurs in nature when a river discharges into the sea. Clearly, when this mixing process occurs spontaneously, the associated energy is completely dissipated. Conversely, this energy can be harvested by adopting a suitable device devoted to performing a “controlled mixing” of the two streams at different salinity (e.g. river water and seawater). Depending on the device type, different technologies have been proposed so far: the Chemical Engineering Research group of the University of Palermo, involved in this field of R&D activities, recently edited a book where Pressure Retarded Osmosis (PRO), Reverse Electrodialysis (RED) and Accumulator mixing (AccMix) are indicated as the most promising technologies.
When employed within a closed loop, each SGP technology can be used to convert waste heat into electricity. This concept is named Salinity Gradient Power Heat Engine (SGPHE) (Figure 5) and consists of two main units: a power generation unit, where the controlled mixing of the two solutions at different salinity produces electricity, and a regeneration unit, where the low-grade heat is used to restore the original salinity gradient between the two streams.
The adoption of the closed loop opens up a large variety of advantages and possibilities with respect to open-loop SGP technologies. As an example, the closed loop does not require natural/artificial basins of solutions at different salt concentrations in the same area. More importantly, no pre-treatments are necessary, and any kind of solute or solvent can be employed with the aim of maximizing power production and cycle efficiency. In this regard, according to recent estimates, it appears that the SGPHE (i) can be operated at very low temperatures where no alternative technologies exist and (ii) can potentially achieve exergetic efficiencies higher than any other technology.
Highly polluted sites are present all over the world, particularly in countries that, in recent years, have seen uncontrolled and unplanned economic development. They are the result of earlier industrialization and poor environmental management practices that caused the alteration of groundwater and surface water, degraded air quality, the impairment of soil functions, and pollution in general. In Europe there are about 500,000 contaminated sites and two million potentially contaminated sites, often left behind by retired industrial, extractive and military activities. The U.S. Department of Energy (DOE) manages an inventory of sites including 6.5 trillion liters of contaminated groundwater (equal to about four times the daily U.S. water consumption) and 40 million cubic meters of soil and debris contaminated with radionuclides, metals, and organics. Some of the main contamination sources in this field are depicted in Figure 1.
Remediation represents the set of solutions, such as treatment, containment or removal/degradation of chemical substances or wastes, so that they no longer represent an actual or potential risk to human health or the environment, taking into account the current and intended use of the site. As described by the EPA, any remediation management plan deals with complex systems involving different pollutants and polluted matrices and should include all the impacted environmental aspects, such as air quality, noise, surface water, soil quality, groundwater management, flora and fauna, and heritage, as well as social, structural and safety aspects. The dispersion of the Non-Aqueous Phase Liquid (NAPL) in Figure 1 depends on the site geotechnical characteristics, the aquifer relative positions and the pollutant chemical properties. Sometimes the contamination sources succeed in reaching the groundwater, as at solid waste landfills where chlorinated organic compounds reach the groundwater due to rainfall leaching.
Typical pollutants in this sector are aromatic hydrocarbons, heavy metals and pesticides, as well as biological contaminants. The choice of a contaminated soil remediation technology is based on economic factors, the site-specific characteristics and the remediation goal. Remediation technologies can be realized both on-site and off-site and act mainly by transformation (degradation of complex organic compounds to simpler intermediates, possibly up to full mineralization) or by removal from the contaminated matrix, typically for heavy metals, which are already in elemental form and cannot be further degraded. When these techniques cannot be applied or are too risky and expensive, immobilization (with ordinary Portland cement (OPC), water glass (sodium silicate), gypsum or organic polymers, for example acrylic or epoxy resins) or covering with bentonite or a polymeric membrane are the available options to isolate the polluted site, reducing water infiltration and the possible mobilization and migration of the elements. In this brief review, it is complicated to clearly distinguish the methods according to the contaminated matrix (the phenomenon often being multi-matrix and multi-pollutant) or to the possibility of realizing them close to the site or far away in centralized systems. Therefore, the treatments are presented in relation to the technological nature of the process (physical-chemical, thermal and biological), as listed in Table 1.
Due to the recalcitrant nature or the toxicity of the main pollutants, incompatible with biological systems, it is necessary to implement chemical methods to neutralize these substances, i.e. to convert them into less harmful, less mobile, more stable and inert forms. Injection of chemical reductants, including calcium polysulphide, has been used to promote contaminant reduction and precipitation within aquifers. In-situ oxidation consists of injecting oxidants such as hydrogen peroxide (H2O2) into the contaminated aquifer.
Contaminants that are well suited to remediation using this approach include metals with a lower solubility under reduced conditions (e.g. Cr(VI), through reduction to Cr(III) and precipitation of Cr(III) hydroxides). Advanced oxidation processes releasing hydroxyl radicals are the most affordable techniques to degrade organic recalcitrant pollutants. These include the use of H2O2, UV, O3, “Fenton reactants”, etc.
The physical treatments mainly consist in the separation of the pollutant. Alternatively, it is possible to isolate a highly concentrated matrix to be eventually treated or sent to final disposal. This solution avoids the addition of chemical reagents (and secondary pollutant formation) but entails costs for gas treatment and for landfilling, especially for special waste. Air sparging is successfully applicable to volatile compounds (hydrocarbons and chlorinated solvents). The physical and geotechnical characteristics of the soil, as well as the chemical properties of the pollutant, are fundamental in the process analysis. The aquifer characteristics, if present, also influence the process. Natural zeolite has been studied extensively for the remediation of heavy metal-polluted soils due to its wide availability and low cost.
Pump-and-treat involves removing contaminated groundwater from strategically placed wells, treating the extracted water at the surface to remove the contaminants using mechanical, chemical, or biological methods, and discharging the treated water to the subsurface, surface, or municipal sewer system. Water from the aquifer is pumped through the wells and piped to the pump-and-treat facilities, where contaminants are removed through ion exchange, which relies on tiny resin beads, resembling cornmeal, packed into large tanks or columns. As the water travels through the columns, hexavalent chromium ions cling to the resin beads and are removed from the water.
Depending on the type of reactive material and contaminants, the degradation may be complete or may produce intermediates with a different toxicity from the initial compounds. Therefore, very often the use of combined chemical-physical techniques (e.g. soil washing) can exploit the advantages of both.
While pump-and-treat of groundwater mainly includes ex-situ treatments, Permeable Reactive Barriers (PRBs) can be used for the in-situ treatment of contaminated groundwater. As visible in Figure 4, a PRB consists, in its usual configuration, of a continuous treatment zone formed by the reactive material, installed in the subsoil in order to intercept the contaminated plume and induce the degradation of the contaminants from the mobile liquid phase. This technology is energy-saving since a reactive medium with a permeability higher than that of the surrounding soil has to be used. In this way, remediation occurs under the natural gradient of the aquifer, without additional energy contribution except the groundwater hydraulic head. PRBs are defined Permeable Adsorptive Barriers (PABs) when an adsorbing material is used as the reactive medium and contaminant removal is carried out by adsorption6. Recently, academic research is focusing on the investigation of innovative configurations, such as Discontinuous Permeable Adsorptive Barriers, arranged as a passive well array with one or more lines at a fixed distance from one another and filled with adsorbing materials (Figure 5). Comparing the continuous and discontinuous adsorptive barrier configurations, it can be found that the decontamination of the same volume of groundwater can be carried out with a smaller barrier volume, and consequently a lower remediation cost, if a discontinuous barrier is used, highlighting the technological and cost-saving innovation of this advanced configuration.
The biological remediation methods (biosparging, landfarming) are available for highly permeable and homogeneous soils for the mineralization or conversion of organic contaminants (SVE, BV, BTEX, light hydrocarbons, non-chlorinated phenols) into less toxic forms, or more toxic but less bioavailable ones. These processes primarily exploit the ability of microorganisms to transform the polluting material partly into biomass and partly into less complex molecules (eventually to minerals, carbon dioxide and water). These processes have also been tried for removing heavy metals from soil, using biological leaching (bioleaching) or redox reactions. These methods are also non-invasive and can bring potential beneficial effects on the structure and fertility of the soil. In addition to microorganisms, plants can accumulate and degrade contaminants in the so-called phytoremediation process. This recovery method takes advantage of the complex interaction between the root system of plants, microorganisms and soil, and represents the most sustainable solution in this sector. A review is given by Puldorf and Watson. A typical plant may accumulate about 100 parts per million (ppm) zinc and 1 ppm cadmium. Thlaspi caerulescens (alpine pennycress, a small, weedy member of the broccoli and cabbage family) can accumulate up to 30,000 ppm zinc and 1,500 ppm cadmium in its shoots, while exhibiting few or no toxicity symptoms. A normal plant can be poisoned by as little as 1,000 ppm of zinc or 20 to 50 ppm of cadmium in its shoots. Phytoremediation has also been studied for degrading PCBs and PCDD/Fs. Some disposal methods for phytoremediation crops were proposed by Sas-Nowosielska et al. The most beneficial is to use phytoextraction crops for energy production, i.e. pyrolysis, gasification or combustion. The fate of trace elements during combustion, pyrolysis, fluidized bed and downdraft gasification has been studied in the recent scientific literature.
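The accumulation figures quoted above translate into large enrichment factors for the hyperaccumulator relative to a typical plant; a quick check using only the values from the text:

```python
# Shoot concentrations in ppm, as quoted in the text.
typical = {"Zn": 100.0, "Cd": 1.0}                  # a typical plant
hyperaccumulator = {"Zn": 30_000.0, "Cd": 1_500.0}  # Thlaspi caerulescens

# Enrichment factor of the hyperaccumulator over a typical plant.
factors = {metal: hyperaccumulator[metal] / typical[metal] for metal in typical}
print(factors)  # {'Zn': 300.0, 'Cd': 1500.0}
```

These 300x and 1500x factors, well above the 1,000 ppm Zn and 20-50 ppm Cd toxicity thresholds for normal plants, are what makes hyperaccumulators attractive for phytoextraction.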
Thermal methods can induce the separation of the pollutant by means of desorption/volatilization, and its destruction or immobilization by fusion of the solid matrix. In the desorption of pollutants from contaminated soil, a major research effort has been initiated to characterize the rate-controlling processes associated with the evolution of hazardous materials from soils. The P.O.N. Research Project DI.MO.D.I. was focused on the treatment of soils contaminated by hydrocarbons with an innovative device that could solve many of the logistical problems that make “on-site” treatment difficult. The device developed (sketched in Figure 6) consists in a mobile unit, installed on a truck, completely self-sufficient, able to carry out emergency safety and remediation actions in reasonably short times and at low cost. The treatment unit utilizes a dual fluidized bed reactor technology fed by the hot gas produced by a hot gas generator. The upper bed is aimed at soil drying while the lower bed is aimed at soil remediation by thermal desorption. The processes of soil drying and desorption of volatile and semi-volatile organic contaminants occur by direct air/solid particle contact promoted by the fluidization technology. The soil requires a pre-treatment based on shredding/pulverizing and size separation, in order to feed the soil at the optimal size for fluidization. Particle removal from the desorption gas stream is carried out by dust separator units (fabric filter and cyclone).
The change in the average crude oil quality, due to the scarcity of light oil reserves and to the increased use of shale oil, oil sands and bitumen, is causing significant difficulties to refineries, which are obliged to accept heavier feeds with very different physical properties (lower API gravity, higher amount of impurities). This has stimulated the development of new technologies for upgrading heavy and extra-heavy oils in order to improve their characteristics and, consequently, refinery performance.
Heavy oils are classified as oils with an API gravity within the range 10°-22°, whereas extra-heavy oils have an API gravity < 10°. The geographical distribution of heavy oil and bitumen reserves is reported in Table 1: these reserves are continuously increasing, replacing the light oil ones, and Oil & Gas companies have found and developed competitive solutions to extract and treat these oils.
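The API gravity classification above follows the standard definition API = 141.5/SG - 131.5, with SG the specific gravity at 60°F; a short sketch:

```python
def api_to_sg(api: float) -> float:
    """Specific gravity at 60°F from API gravity (standard definition)."""
    return 141.5 / (api + 131.5)

def sg_to_api(sg: float) -> float:
    """Inverse relation: API gravity from specific gravity at 60°F."""
    return 141.5 / sg - 131.5

# The 10° API boundary for extra-heavy oils corresponds exactly to the density
# of water (SG = 1.0): extra-heavy oils are denser than water.
print(api_to_sg(10.0))            # 1.0
print(round(api_to_sg(22.0), 3))  # 0.922, upper boundary of the heavy-oil range
```

Note that the scale is inverse: the lower the API gravity, the denser (heavier) the oil.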
Clearly, there is a need to upgrade heavy oils before feeding them to refineries, in order to improve the quality of downstream products and to increase the topping distillate flows: the conventional upgrading processes include carbon rejection and hydrogen addition technologies. However, when the properties of the heavy and extra-heavy oils are critical, more effective solutions are needed to make the oil suitable as refinery feedstock. For this reason, researchers and industries are proposing a number of innovative solutions, some of which are already in the full-scale demonstration phase.
In the following, some of these new emerging oil upgrading configurations are presented. For a complete list of developed technologies, the author suggests the review paper by Castaneda, Munoz and Ancheyta, which includes the description and comparison of 23 new processes.
The process is based on a circulating transport bed of hot sand that heats the heavy feedstock and converts it to lighter products. The upgraded products and the sand are then separated in a cyclone, and the products are quenched and routed to the atmospheric distillation unit.
The main benefits of the HTL configuration are that it can be integrated at the well-head and that it is simple and cheap. The drawbacks are the large dimensions of the equipment, the low volumetric yield of upgraded crude, the low capacity for extra-heavy oil processing, the high formation of coke and a low reduction of the sulfur content. At the exit of the upgrading plant, the oil reaches an API gravity of 18-19° and a kinematic viscosity at 100°C of 23 cSt.
The technology development has been completed and Ivanhoe Energy is designing industrial plants in Canada, Latin America and the Middle East.
HCAT is a catalytic heavy oil upgrading technology developed by Headwaters Technology Innovations Group (HTIG). The process is based on a catalytic reactor packed with a molecule-sized catalyst, assuring high conversion of the heavy oil. The main benefits of the HCAT configuration are constant product quality, feedstock flexibility and flexible, high conversion (up to 95%).
Neste Oil Corporation’s Porvoo refinery (Finland) was the first refinery to implement, in 2011, the HCAT heavy oil upgrading technology. More than 500,000 barrels of heavy oil are processed in its upgrading reactors every day, and an additional refinery capacity of 200,000 barrels per day has been reached.
The Viscositor technology is patented by the Norwegian company Ellycrack AS and is based on the atomization of the heavy oil by means of heated sand in a high-velocity chamber. Basically, the process is composed of the following steps (refer to the block diagram shown in Figure 2):
The advantages of the process are the low temperature and pressure required, the nearly self-sustained operation thanks to the coke formation in the reactor, and the good quality of the final upgraded oil.
The IMP configuration is a catalytic hydrotreatment-hydrocracking process for heavy oil at mild operating conditions, able to achieve high removal of metals, sulfur compounds and asphaltenes, and a large conversion of the heavier share of the oil stream to more valuable distillates.
The most important characteristics of the IMP process are the low fixed investment and low operating costs, with an attractive return on investment.
The IMP technology can be applied both for conversion of heavy and extra-heavy oils to intermediate oils and as a first processing unit for heavy and extra-heavy crude oils in a refinery. The final properties of upgraded oils, depending on the heavy oil feedstock, are: API gravity = 22-25°; sulfur content = 1.1 - 1.15 wt%; C7 asphaltenes = 4.7 - 5.3.
A first industrial unit application is being analyzed by Petroleos Mexicanos (PEMEX).
Nex-Gen is an innovative process for heavy oil upgrading which uses ultrasonic waves to break the long hydrocarbon chains and simultaneously adds a hydrogen stream.
Basically, Nex-Gen is a cavitation process: the ultrasonic energy forms cavitation bubbles in the heavy oil stream; the bubbles then collapse at high temperature and pressure, breaking the long chains of the heavy hydrocarbon molecules.
The next figure shows a scheme of the Nex-Gen configuration.
A first industrial plant is going to be designed and integrated near the Athabasca tar sands (Edmonton, Alberta), with a capacity of 10,000 barrels per day. The mild operating conditions (temperature = 0-70°C, pressure = 1-5 bar) allow a reduction of energy consumption and of operating and maintenance costs by 50%.
Chattanooga process is a continuous process based on a fluidized bed reactor operating at high pressure and temperature in a hydrogen environment.
The main equipment of the configuration is the pressurized fluid bed reactor and associated fired hydrogen heater. The reactor can continuously convert oil by thermal cracking and hydrogenation into hydrocarbon vapors while removing spent solids.
The energy requirements associated with the Chattanooga configuration are significantly lower than those of the traditional heavy oil upgrading technologies, as are the operating and capital costs.
Well integrity is defined in NORSOK D-010 (a functional standard which fixes minimum requirements for the equipment of oil and gas production wells) as the "application of technical, operational and organizational solutions to reduce risk of uncontrolled release of formation fluids throughout the life cycle of a well".
Basically, well integrity technologies cover many aspects of well operating processes, well services, tubing and wellhead integrity, safety system testing, etc.
Clearly, production tubes have the greatest probability of failure since they are exposed to corrosive elements from the produced fluids. Moreover, the production tubing consists of many connections, which are points of weakness with a high risk of leaks. International standards impose the installation of two well barriers between the reservoirs and the environment in order to prevent loss of containment.
In this paper, among the components of the production tube sealing system installed to avoid fluid losses, the innovative sealing materials are assessed and compared.
The most commonly used sealing material is cement, which is a well-known and cheap material. However, several of its properties are not ideal for handling well integrity issues, for example gas migration through its structure, long-term degradation due to temperature and chemical exposure, shrinkage, etc.
The following figure shows the main problems in applying cement as a sealing material in well casing.
For this reason, alternative sealing materials are being studied in order to overcome the issues related to cement application.
Such materials have to assure a series of properties, among which:
An exhaustive list of the most interesting alternative materials is reported in 2. In the following, the most interesting ones (ThermaSet, Sandaband and Ultra Seal) are presented and described.
Table 1 - Mechanical properties comparison between ThermaSet and Portland cement.
The excellent properties of the material are maintained over time, without significant decay: Figure 2 shows the compressive strength after 1 year under a crude oil pressure of 500 bar, demonstrating that its value stabilizes within the range 40-45 MPa6.
Sandaband is a patented material, owned by Sandaband Well Plugging (SWP), consisting of 70% to 80% quartz solids with a variable grain size (between 1 µm and 2 mm). The rest of the volume is composed of water and chemicals that make the material easily pumpable.
All materials composing Sandaband are chemically stable, with no degradation over time or reaction with other chemicals.
An important property is that Sandaband behaves like a Bingham plastic: it needs a finite shear stress to start flowing and then shows a linear dependence between shear stress and shear rate, so that the material quickly forms a rigid body once pumping stops (refer to Figure 3).
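The Bingham behaviour described above can be sketched in a few lines; the yield stress and plastic viscosity values used here are illustrative assumptions, not Sandaband data:

```python
def bingham_shear_rate(tau: float, tau_y: float, mu_p: float) -> float:
    """Shear rate (1/s) of a Bingham plastic: rigid below the yield stress
    tau_y, linear stress/shear-rate relation with plastic viscosity mu_p above it."""
    if tau <= tau_y:
        return 0.0                    # below yield: behaves as a rigid body
    return (tau - tau_y) / mu_p       # above yield: linear (Newtonian-like) flow

# Illustrative values: yield stress 50 Pa, plastic viscosity 10 Pa*s.
print(bingham_shear_rate(30.0, 50.0, 10.0))    # 0.0  -> plug holds, no flow
print(bingham_shear_rate(150.0, 50.0, 10.0))   # 10.0 -> flows while pumped
```

This is exactly what makes the material useful as a plug: pumping supplies the stress needed to exceed the yield value, and once pumping stops the stress falls below it and the column sets as a rigid body.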
On an industrial scale, one can visualize a solar refinery (see Figure 1) that converts readily available sources of carbon and hydrogen, in the form of CO2 and water, to useful fuels, such as methanol, using energy sourced from a solar utility. The solar utility, optimized to collect and concentrate solar energy and/or convert solar energy to electricity or heat, can be used to drive the electrocatalytic, photoelectrochemical (PEC), or thermochemical reactions needed for conversion processes. For example, electricity provided by PV cells can be used to generate hydrogen electrochemically from water via an electrocatalytic cell.
However, hydrogen has a low volumetric energy density and cannot be easily stored and distributed like hydrocarbon fuels. Therefore, rather than utilizing solar-generated hydrogen directly and primarily as a fuel, its utility is much greater, at least in the short to intermediate term, as an onsite reagent for converting CO2 to CH4 or for generating syngas, heat, or electricity. Reacting CO2 with hydrogen not only provides an effective means for storing CO2 (in methane, for example), it also produces a fuel that is much easier to store, distribute, and utilize within the existing energy supply infrastructure.
The idea of converting CO2 to useful hydrocarbon fuels by harnessing solar energy is attractive in concept. However, significant reductions in CO2 capture costs and significant improvements in the efficiency with which solar energy is used to drive chemical conversions must be achieved to make the solar refinery a reality.
Solar energy collected and concentrated within a solar utility can be harnessed in different ways: (1) PV systems could convert sunlight into electricity, which in turn, could be used to drive electrochemical cells that decompose inert chemical species such as H2O or CO2 into useful fuels (see figure 2); (2) PEC or photocatalytic systems could be designed wherein electrochemical decomposition reactions are driven directly by light, without the need to separately generate electricity; and (3) photothermal systems could be used either to heat working fluids or help drive desired chemical reactions such as those connected with thermolysis, thermochemical cycles, etc. (see Figure 3). Each of these approaches can be used to generate environmentally friendly solar fuels that offer “efficient production, sufficient energy density, and flexible conversion into heat, electrical, or mechanical energy”. The energy stored in the chemical bonds of a solar fuel could be released via reaction with an oxidizer, typically air, either electrochemically (e.g., in fuel cells) or by combustion, as is usually the case with fossil fuels. Of the three approaches listed here, only the first (PV and electrolysis cells) can rely on infrastructure that is already installed today at a scale that would have the potential to significantly affect current energy needs. In fact, the PEC and photothermal approaches, though they hold promise for achieving simplified assembly and/or high energy conversion efficiencies, require considerable development before moving from the laboratory into pilot-scale and commercially viable assemblies.
The CO2 concentrations in the atmosphere are still low enough (0.04%) that it would be impractically expensive to capture and purify CO2 from the atmosphere, but other sources of CO2 are available that are considerably more concentrated. Power generation based on natural gas or coal combustion is responsible for the major fraction of global CO2 emissions, with other important sources being the cement, metals, oil refinery, and petrochemical industries. Indeed, a growing number of large-scale power plant carbon dioxide capture and storage (CCS) projects are either operating, under construction, or in the planning stage, some of them involving facilities as large as 1,200 MW capacity. While solar PV energy conversion has the potential to reduce CO2 emissions by serving as an alternative means of generating electricity, harnessing solar energy to convert the CO2 generated by other sources into useful fuels and chemicals that can be readily integrated into existing storage and distribution systems would move us considerably closer to achieving a carbon-neutral energy environment.
Herron et al., in a very recent review, examine the main routes for CO2 capture from stationary sources with high CO2 concentrations derived from post-combustion, precombustion, and oxy-combustion processes.
In post-combustion, flue gases formed by combustion of fossil fuels in air lead to gas streams with 3%–20% CO2 in nitrogen, oxygen, and water. Other processes produce even higher CO2 concentrations: in pre-combustion, CO2 is generated at concentrations of 15%–40% at elevated pressure (15–40 bar) during H2 enrichment of syngas via the water–gas shift reaction (WGS — see Figure 1); in oxy-combustion, fuel is combusted in a mixture of O2 and CO2 rather than air, leading to a product with 75%–80% CO2. CO2 capture can be achieved by absorption using liquid solvents (wet-scrubbing) or solid adsorbents.
In the former approach, physical solvents (e.g., methanol) are preferred for concentrated CO2 streams with high CO2 partial pressures, while chemical solvents (e.g., monoethanolamine, MEA) are useful in low-pressure streams.
Energy costs for MEA wet-scrubbing are reportedly as low as 0.37–0.51 MWh/ton CO2 with a loading capacity of 0.40 kg CO2 per kg MEA. Disadvantages of this process are the high energy cost for regenerating solvent, the cost to compress captured CO2 for transport and storage, and the low degradation temperature of MEA. Alternatives include membrane and cryogenic separation. With membranes there is an inverse correlation between selectivity and permeability, so one must optimize between purity and separation rate.
Cryogenic separation ensures high purity at the expense of low yield and higher cost. Currently, MEA absorption is industrially practiced, but is limited in scale: 320–800 metric tons CO2/day (versus a CO2 generation rate of 12,000 metric tons per day for a 500 MW power plant). Scale-up would be required to satisfy the needs of a solar refinery.
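Using only the figures quoted above (unit capacity, MEA loading and regeneration energy), a back-of-envelope sketch of the scale-up gap for a 500 MW plant:

```python
import math

# Figures quoted in the text.
co2_per_day = 12_000.0        # t CO2/day generated by a 500 MW power plant
max_unit_capacity = 800.0     # t CO2/day, upper end of current MEA units
loading = 0.40                # kg CO2 captured per kg of MEA
regen_energy = (0.37, 0.51)   # MWh per t CO2 for MEA wet-scrubbing

units_needed = math.ceil(co2_per_day / max_unit_capacity)
mea_circulated = co2_per_day / loading                         # t MEA cycled/day
energy_per_day = tuple(e * co2_per_day for e in regen_energy)  # MWh/day
plant_output = 500.0 * 24.0                                    # MWh/day generated

print(units_needed)                              # 15 parallel trains at best
print(round(mea_circulated))                     # 30000 t of MEA cycled per day
print([round(e) for e in energy_per_day], plant_output)
```

At the quoted energy cost, capture alone would absorb roughly 4,400-6,100 MWh/day against the ~12,000 MWh/day the plant generates, which is consistent with the efficiency penalty of carbon capture noted later in the text.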
Alternatives, such as membranes, have relatively low capital costs, but require high partial pressures of CO2 and a costly compression step to achieve high selectivity and rates of separation.
A very important point to consider about solar refinery reliability is that since carbon capture reduces the efficiency of power generation, power plants with carbon capture will produce more CO2 emissions (per MWh) than a power plant that does not capture CO2. Therefore, the cost of transportation fuel produced with the aid of CO2 capture must also cover the incremental cost of the extra CO2 capture. These costs must then be compared to the alternative costs associated with large-scale CO2 sequestration. Finally, one also needs to consider the longer-term rationale for converting CO2 to liquid fuels once fossil-fuel power plants cease to be major sources of CO2. Closed-cycle fuel combustion and capture of CO2 from, e.g., vehicle tailpipes, presents a considerably greater technical and cost challenge than capture from concentrated stationary sources.
Christos Maravelias and colleagues from the University of Wisconsin have recently modeled and analyzed the energy and economic cost of every step and each alternative technology contained in a solar refinery. The result is a general framework that allows scientists and engineers to evaluate how various improvements in the materials, manufacturing and processing technologies that enable carbon dioxide capture and conversion to fuels, using solar, thermal and electrical energy inputs, would accelerate development, influence cost and shape the vision of the solar refinery. It also enables evaluation of which alternative technologies are the most economically feasible and should be targeted, or highlights those that, even if developed, would still be hopelessly uneconomic and can therefore be ruled out immediately.
The view that emerges from this techno-economic evaluation of building and operating a solar refinery is one of guarded optimism. On the subject of energy efficiency, it is clear that solar powered CO2 reduction is currently lagging far behind that of solar driven H2O splitting and more research is needed to improve the activity of photocatalysts and the efficacy of photoreactors. In the indirect process of transforming CO2/H2O to fuels, it is apparent that if the currently achievable solar H2O-to-H2 conversion (>10%) can be matched by solar CO2/H2-to-fuel conversion efficiencies, through creative catalyst design and reactor engineering, this would represent a promising step towards an energetically viable solar refinery. For the process that can directly transform CO2/H2O to fuels, improvements in conversion rates and product selectivity are key requirements for achieving energy efficiency in the solar refinery.
Economic efficiency is also key to the success of the solar refinery of the future. For currently achievable CO2 reduction rates and efficiencies, the minimum selling price of methanol, a representative fuel, was evaluated by the techno-economic analysis and turned out to be more than three times greater than the industrial selling price, even though the cost of the CO2 reduction step, which is estimated to be quite high, was not included in the estimates. Improving the activity of CO2 reduction photocatalysts by several orders of magnitude would have a significant impact on the energy and economic costs of operating a solar refinery.
It is clear that the cost and energy efficiency of carbon capture and storage is an area where big improvements need to be made if the solar refinery is to succeed. One other point worth highlighting is the availability of water, since in some parts of the world water scarcity could be a serious constraint.
To conclude, multidisciplinary teams of materials chemists, materials scientists, and materials engineers across the globe believe in the dream of the solar refinery and a sustainable CO2-based economy. Nevertheless, it is clear that developing models to evaluate the energy efficiency and economic feasibility of the solar refinery, while at the same time identifying the hurdles that must be surmounted to realize competitive processing of solar fuels, will continue to play a crucial role in the development of the required technologies.
The average $3 million drilling and fracturing process required for each well uses an average of 4.2 million gallons of water, much of which has traditionally been freshwater. The volume of water can vary significantly and is highly dependent on the length of the drilled lateral.
More than 99.5 percent of the fracturing fluid is water and sand, while other components such as lubricants and bactericides constitute the remaining 0.5 percent. This fracturing mixture enters the well bore, and some of it returns as flowback or produced water, carrying with it, in addition to the original materials, dissolved and suspended minerals and other materials that it picks up in the shale.
Once in production for several years, natural gas wells can feasibly undergo additional hydraulic fracturing to stimulate further production, thereby increasing the volume of water needed for each well. Approximately 10-25 percent of the water injected into the well is recovered within three to four weeks after drilling and fracturing a well. Water that is recovered during the drilling process (drilling water), returned to the surface after hydraulic fracturing (flowback water), or stripped from the gas during the production phase of well operation (produced water) must be properly disposed of.
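The volumes quoted above translate directly into disposal-planning numbers. The sketch below is a simple back-of-the-envelope calculation using only the figures in the text (4.2 million gallons injected per well, 10-25 percent recovery) to estimate the barrels of flowback water each well produces.

```python
# Back-of-the-envelope flowback estimate from the figures quoted in the
# text: ~4.2 million gallons injected per well, of which 10-25 percent
# returns within three to four weeks of drilling and fracturing.
GALLONS_PER_BARREL = 42.0  # standard oil-field barrel

def flowback_range(injected_gal, low=0.10, high=0.25):
    """Return (min, max) recovered flowback water, in barrels."""
    return (injected_gal * low / GALLONS_PER_BARREL,
            injected_gal * high / GALLONS_PER_BARREL)

low_bbl, high_bbl = flowback_range(4_200_000)
print(f"{low_bbl:,.0f} - {high_bbl:,.0f} barrels of flowback per well")
```

On these figures, each well yields roughly 10,000-25,000 barrels of flowback that must be treated, reused or disposed of.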
The recovered water contains numerous pollutants such as barium, strontium, oil and grease, soluble organics, and a high concentration of chlorides. The contents of the water can vary depending on geological conditions and the types of chemicals used in the injected fracturing fluid. These wastewaters are not well suited for disposal in standard sewage treatment plants, as recovered waters can adversely affect the biological processes of the treatment plant (impacting the bacteria critical to digestion) and leave chemical residues in the sewage sludge and the discharge water. Many producers have been transporting flowback and produced water long distances to acceptable water treatment facilities or injection sites, but deep well injection is now also meeting challenges.
The water disposal challenge has spurred a new water treatment industry in the region, with entrepreneurs and established companies creating portable treatment plants and other innovative treatment technologies to help manage produced water, mainly with a focus on water reuse.
Dealing with water scarcity and wastewater (i.e., brine) quality are top priorities in shale and tight gas production. Doing this requires water reuse technology that reduces the waste stream by efficiently separating out salts, heavy metals and nutrients to produce recovered water. Effective filtration must eliminate suspended solids from salt water going to deep well injection.
Cost can be an overriding factor in water treatment and processing decisions. There certainly are environmental considerations involved in using chemicals to perform operations such as frac-water treatment or salt removal and recovery. However, the cost of mitigating chemistry also comes into play. Chemical friction reducers make source water slicker for faster pumping, and then specialty chemicals such as biocides, which kill microorganisms, and scale inhibitors, which control deposits, are added to the water. Mobile ultrafiltration technology can reduce the need for biocides, and with it the cost of treatment.
Slick water fracturing and horizontal drilling were revolutionary developments that made it economically viable to extract unconventional gas on a grand scale. Fracturing lowered the cost of moving the gas to the well bore, while horizontal drilling – which covered a vastly greater expanse of territory than a single vertical probe – exponentially increased the amount of gas that could be withdrawn. It became much more profitable to put wells into shale gas formations, but the cost of doing that business today depends, in no small part, on what ultimately happens to the brine. That, in turn, depends on geography. Chemical treatment is not the challenge so much as affordability; most brine is just discharged to disposal wells, but the fewer of these wells there are, the greater the production expense incurred, and in some parts of the country, geology or the lack of water makes disposal wells unfeasible.
In geographical areas where the geology will not allow disposal wells, such as Pennsylvania, where there are major shale gas deposits, the brine has to be trucked out for disposal elsewhere or cleaned for reuse or discharge. Transportation is not only potentially dangerous but also expensive: disposing of produced water at an injection well costs from $1.50 to $2.00 per barrel, and getting the wastewater from eastern Pennsylvania to Ohio for deep well disposal requires many trucks, each costing about $100/hour for a typical six-hour trip. Evaporation and crystallization technologies can recover almost all of the produced water as pure distilled water and create a salable salt product for uses such as road de-icing or grey-water softening, but that adds another, higher level of cost. In the West, where water is often inexpensive but scarce, it makes much more economic sense to clean up the wastewater and then sell it for land application.
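The economics sketched above can be made concrete with a rough per-barrel cost calculation. Only the injection fee, the trucking rate and the trip time come from the text; the truck capacity (~120 bbl) is an assumed round number for illustration.

```python
# Rough per-barrel disposal economics from the figures quoted in the
# text: a $1.50-$2.00/bbl injection fee plus trucking at ~$100/hour for
# a typical six-hour trip. The truck capacity (~120 bbl) is an assumed
# value for illustration, not a figure from the text.
def disposal_cost_per_bbl(injection_fee,
                          truck_rate_per_hr=100.0,
                          trip_hours=6.0,
                          truck_capacity_bbl=120.0):
    """Injection fee plus trucking cost allocated over one truckload."""
    trucking = truck_rate_per_hr * trip_hours / truck_capacity_bbl
    return injection_fee + trucking

for fee in (1.50, 2.00):
    print(f"${fee:.2f}/bbl fee -> ${disposal_cost_per_bbl(fee):.2f}/bbl total")
```

Under these assumptions, trucking roughly triples the cost of disposal relative to the injection fee alone, which is why on-site treatment and reuse become attractive.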
In order to select the most suitable water treatment technology, there are issues related to the condition, as well as the cost, of the water that must be addressed. Here are some of the principal ones:
While progress has been made on the water quantity and quality impacts of shale gas development, challenges remain, including the potential cumulative long-term water impacts of the industry. Therefore, additional water research and environmental policy changes will be necessary in order to fully realize the economic opportunity of the region’s natural gas wealth while safeguarding the environment.
In the following, some interesting research projects focused on water reuse are reported.
Project 1: Advancing a Web-Based Decision Support Tool (DST) for Water Reuse in Unconventional O&G Development
The objective of this project is the development of a database and a decision support tool (DST) for selecting and optimizing water reuse options for unconventional O&G development, with a focus on flowback and produced water management, treatment and beneficial use for the major shale gas development basins.
The objective of this project is to further develop and optimize engineered osmosis membranes and systems for the treatment of unconventional O&G wastewater (see Figure 3). The main project outcomes are:
Figure 3 - Engineered osmosis process scheme
Project 3: Advanced Biological Pretreatment
The objective of this project is the development and evaluation of cost-effective pre-treatment technologies for O&G wastewater, with emphasis on biological filtration. The major outcomes and outputs are the substantial removal of dissolved organic carbon (96%) and chemical oxygen demand (89%) in produced water from the Piceance and Denver-Julesburg basins.
Natural gas (NG) treatments are the processes needed to sweeten and purify the extracted NG before feeding it to the grid. Such processes are crucial to reach the gas purity targets and constitute a large share of the fixed and operating costs of the NG production sector.
The main components to be removed in the NG purification process are the acid gases, such as carbon dioxide (CO2) and hydrogen sulphide (H2S), and, in many cases, nitrogen (N2).
As reported in the following table, the contents of such components in the extracted NG stream can be high, leading to challenging and expensive separation processes.
|Groningen (Netherlands)||Lacq (France)||Uch (Pakistan)||Uthmaniyah (Saudi Arabia)||Ardjuna (Indonesia)|
1- Absorption processes, by which the components to be separated are absorbed in a liquid solvent in a packed column and then recovered in the solvent regeneration step. The absorption of the component in the solvent can be chemical (chemical absorption) or physical (physical absorption). A widely applied industrial absorption process is the amine (e.g. MDEA) unit for acid gas removal.
2- Adsorption processes, where selected components are adsorbed on the solid surface of specific particles. Then, by increasing the solid bed temperature (Thermal Swing Adsorption - TSA) or reducing the pressure (Pressure Swing Adsorption - PSA), the gas is extracted and the solid is regenerated. The most widely applied adsorption process is PSA, used to remove CO2 from natural gas streams by solid materials with a high affinity for carbon dioxide.
3- Cryogenic processes, also known as low-temperature distillation, which purify gas mixtures at very low temperature by exploiting the different volatilities of the gas components. This technique is not applied for acid gas removal from natural gas because the low concentrations required make it uneconomical.
However, growing interest is being given to separation processes using selective membranes, thanks to their ease of operation, flexibility, smaller footprint and lower capital requirements. Basically, a membrane allows the transfer of certain components but not of others, thus achieving a separation. A schematic layout is reported in Figure 1.
Compared to the other natural gas separation techniques, the membrane process has a lower energy requirement since it does not involve any phase transformation. Moreover, the process equipment is very simple, with no moving parts, compact, relatively easy to operate and control, and also easy to scale up and down. In order to be applied in an industrial process, a selective membrane must have the following properties:
The permeability increases as the selective layer thickness is reduced but, at the same time, both the selectivity and the mechanical resistance are penalized with thin membranes. Therefore, membrane design requires careful optimization. Usually, the applied membranes are composite, fabricated by depositing a thin selective layer on a support able to assure the needed mechanical properties (refer to Figure 2).
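The thickness trade-off described above can be sketched with the solution-diffusion relation, in which the component flux is the permeance (permeability over thickness) times the partial-pressure driving force, while the ideal selectivity is the ratio of the pure-gas permeabilities. The permeability values below are illustrative placeholders, not data for any commercial membrane.

```python
# Solution-diffusion sketch: flux through the selective layer is
# J_i = (P_i / l) * (p_feed,i - p_perm,i), so a thinner layer gives a
# proportionally higher flux. Permeability values are assumed, for
# illustration only (units are symbolic but mutually consistent).
def flux(permeability, thickness, dp):
    """Component flux = permeance (P/l) times partial-pressure driving force."""
    return permeability / thickness * dp

P_CO2, P_CH4 = 10.0, 0.5            # assumed pure-gas permeabilities
selectivity = P_CO2 / P_CH4         # ideal CO2/CH4 selectivity

for thickness in (1.0, 0.5, 0.1):   # thinner selective layer -> higher flux
    print(thickness, flux(P_CO2, thickness, dp=30.0))
```

Halving the layer thickness doubles the flux, which is exactly why composite membranes push the selective layer as thin as the mechanical support allows.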
In the following, some examples of membrane applications for CO2, H2S and nitrogen removal from natural gas are reported.
Carbon dioxide is the largest contaminant found in natural gas and, for this reason, a strong effort has been devoted to finding ways to apply selective membranes in the CH4/CO2 separation process. Currently, the only commercial membranes applied for CO2 removal are polymeric, made of cellulose acetate, polyimides, polyamides, polysulfone, polycarbonates and polyetherimide. The most widely used material is cellulose acetate, as used in UOP's membrane systems: the Separex membrane system has been applied in a number of large NG plants installed worldwide (refer to Figure 3). Another widely applied commercial product is the cellulose triacetate (CTA) membrane developed by Cameron and called CYNARA: such a membrane is applied in the world's largest CO2 membrane plant for natural gas clean-up (700 MMcf/d).
Air Liquide has also developed a membrane module for the purification of NG by removal of CO2, H2S and water vapor. The system is called MEDAL™ and is able to reach the pipeline specification of 2-5% CO2 and 4 ppm H2S. Moreover, the membrane unit can also be used as a pre-treatment, removing the majority of the CO2 and H2S, followed by a typical amine process to further remove carbon dioxide. Another product is offered by ProSep: the membrane is fabricated as a flat sheet, arranged into a spiral-wound module, and then inserted into steel pressure-containing tubes. Such a membrane module has been applied in a number of plants in the U.S.A. and Colombia.
Polymeric materials offer good separation performance but are poisoned by aromatics, organic liquids and water. For this reason, pre-treatment units have to be installed upstream of the membrane separation device, increasing cost and plant complexity. Some innovative membrane technologies have been developed and installed. As an example, the CO2 separation membrane provided by Membrane Technology & Research (MTR) is a new polymeric membrane able to withstand the various components of the NG mixture, thus reducing the need for pre-treatment.
In contrast to CO2 removal by membranes, which now sees many industrial applications, H2S removal is still in a phase of pre-industrial development. The most interesting technologies are developed and tested by Membrane Technology & Research. MTR has developed the SourSep™ system for bulk removal of H2S from pressurized sour gas. The proposed architecture is based on a simple single-stage process able, thanks to a proper membrane installation, to assure bulk removal of H2S (>75%). The permeate stream generated is very sour and can be re-injected into the extraction well or processed in a conventional Claus unit. The retentate stream has to be fed to another H2S removal unit (amine absorption or a scavenger process) to further reduce the sulfur content. Figure 4 shows a SourSep™ installation.
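A component mass balance shows why a bulk-removal stage still needs downstream polishing. Only the >75% bulk-removal figure comes from the text; the feed H2S content and the stage cut (permeate-to-feed ratio) below are assumed values for illustration.

```python
# Component balance around a single-stage bulk H2S removal membrane:
#   F*x_feed = P*y_perm + R*x_ret,  with  P = stage_cut * F
# and the permeate carrying the fraction `removal` of the incoming H2S.
# Feed composition and stage cut are assumed values for illustration.
def retentate_h2s(x_feed, removal, stage_cut):
    """H2S mole fraction left in the retentate after the membrane stage."""
    return x_feed * (1.0 - removal) / (1.0 - stage_cut)

x_ret = retentate_h2s(x_feed=0.10, removal=0.75, stage_cut=0.20)
print(f"retentate H2S mole fraction: {x_ret:.4f}")
```

Even with 75% removal, a 10% H2S feed leaves about 3% H2S in the retentate under these assumptions, consistent with the need for a downstream amine or scavenger unit to reach pipeline or turbine specifications.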
Another membrane application for H2S removal, also proposed by MTR, targets the stringent H2S content limit (<40 ppm) required when the NG is fed to an engine or a gas turbine. Such a low concentration is required to avoid corrosion and damage of the mechanical components. A scheme of this process, proposed by MTR, is illustrated in Figure 6: after NG compression, the raw gas stream is sent to a filter and then to the membrane unit which, thanks also to the high pressure and, consequently, the large pressure driving force across the membrane, drastically reduces the sulfur content.
UOP has also developed and applied a polymeric membrane for the removal of H2S, testing it in a pilot plant and thus demonstrating membrane stability over a wide range of operating conditions together with suitable values of permeability and selectivity.
Selective membranes are also proposed for NG denitrogenation but, according to the DOE, the challenge of developing a competitive membrane for N2/CH4 separation has not yet been overcome. Both glassy polymer (nitrogen-permeable) and rubbery polymer (methane-permeable) membranes can be applied. However, while a nitrogen/methane selectivity of at least 15 is required to make a denitrogenation membrane economically competitive, the highest selectivity available with current polymers is only about 2-3. Therefore, strong R&D efforts are required. Some interesting studies can be found in the scientific literature, such as the works published by the University of Massachusetts and Aachen University. Currently, MTR and CB&I are the only manufacturers of membranes for nitrogen removal. The membrane module they developed, called NitroSep™, has been applied to NG plants of up to 20 MMSCFD with nitrogen contents of up to 15% (refer to Figure 7).
Energy recovery and process integration are the most direct routes to increasing process efficiency. In industrial processes (in particular in the chemical and petrochemical sector), performance improvement is mandatory in order to face climate change as well as rising energy costs. This objective can be achieved by integrating systems so as to simultaneously minimize two objective functions: the investment cost and the energy consumption.
From the heat transfer standpoint, the optimal system is the one that balances the two abovementioned functions by identifying the most convenient way to transfer heat between the various fluids in the overall system (in a way compatible with process control constraints, space requirements and safety risks). This problem is inherently complex, since the possible interconnections in a plant configuration vary with the operating conditions. A systematic approach to this issue was given by Nishida and co-workers, who identified from the theoretical point of view the two main areas of process integration: the identification of the different possible alternatives and the development of heuristic criteria to discard the worst solutions. Pinch Analysis (PA) was born from these necessities through academic work such as that carried out at ETH Zurich and Leeds University in the 1970s. The first systematic treatment of pinch technology was given by Linnhoff, who applied thermodynamic fundamentals to improve process efficiency, save energy, reduce investment cost and optimize process control. By analysing the heat flow cascade, Linnhoff defined the pinch point as the temperature level corresponding to zero heat flux between the hot and cold fluids (Fig. 1) and proposed a graphical approach based on the Grand Composite Curve in order to easily evaluate the pinch and the energy target. His works became the main textbooks on pinch analysis. He also established Linnhoff March Ltd in 1983, offering process design services to international clients; by the 1990s around 80% of the world's largest oil and petrochemical companies had become its clients or sponsors. The expanded 2006 edition of "Pinch Analysis and Process Integration" is the fundamental book of modern PA.
These methods are now also recognized as fundamental for pollution prevention, with a view to reusing and reducing resources as well as optimizing end-of-pipe treatment and disposal.
Intuitively, the main field of application of PA is the optimization of the Heat Exchanger Networks (HEN) present in complex systems. The approach, based on thermodynamic analysis, does not use advanced unit operations for performance improvement, but aims to match the cold and hot process streams with a HEN that minimizes the external energy supply. According to PA fundamentals, the first step is to draw the heating and cooling curves to evaluate the minimum temperature difference ΔTmin and the related energy target corresponding to reasonable values of the temperature differences.
The interval temperatures are used to compose a Grand Composite Curve (GCC) that gives an overall process overview in the temperature-heat flow diagram (Fig. 1). The smaller the ∆Tmin, the more heat can be transferred in the heat exchangers, but this also leads to a larger heat exchanger area, which is costly. Hence, choosing an optimal ∆Tmin is possible only by integrating economic considerations.
The diagram is commonly divided into two sub-problems defined by the pinch point (i.e. the constrained region in which there is the minimum temperature difference between the streams). This approach has two main corollaries: do not transfer heat across the pinch; do not use external cooling above the pinch or external heating below the pinch (as visible in Fig. 2).
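The targeting step described above can also be done numerically with the classic "problem table" heat cascade: shift hot and cold stream temperatures onto a common scale, compute the net heat surplus of each temperature interval, and cascade the surpluses from the top down; the most negative cumulative value fixes the minimum hot utility and the pinch. The stream data below are illustrative textbook-style values, not taken from the text.

```python
# Minimal "problem table" heat cascade for Pinch Analysis targeting.
# Stream data are illustrative. Each stream: (supply T, target T, CP = m*cp in kW/K).
DT_MIN = 10.0
hot_streams  = [(250.0,  40.0, 0.15), (200.0,  80.0, 0.25)]
cold_streams = [( 20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]

# Shift hot streams down and cold streams up by DT_MIN/2 so both sets
# can be compared on a single (shifted) temperature scale.
shifted = [(ts - DT_MIN/2, tt - DT_MIN/2, cp, +1) for ts, tt, cp in hot_streams] \
        + [(ts + DT_MIN/2, tt + DT_MIN/2, cp, -1) for ts, tt, cp in cold_streams]

bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)

def net_cp(t_hi, t_lo):
    """Sum of hot CPs minus cold CPs for streams spanning the whole interval."""
    total = 0.0
    for ts, tt, cp, sign in shifted:
        if max(ts, tt) >= t_hi and min(ts, tt) <= t_lo:
            total += sign * cp
    return total

# Cascade the interval surpluses from the top down; the most negative
# cumulative value sets the minimum hot utility and locates the pinch.
cascade, heat = [0.0], 0.0
for t_hi, t_lo in zip(bounds, bounds[1:]):
    heat += net_cp(t_hi, t_lo) * (t_hi - t_lo)
    cascade.append(heat)

q_hot_min = -min(cascade)                    # minimum hot utility (kW)
q_cold_min = cascade[-1] + q_hot_min         # minimum cold utility (kW)
pinch_shifted_T = bounds[cascade.index(min(cascade))]
print(q_hot_min, q_cold_min, pinch_shifted_T)
```

For these sample streams the cascade gives a minimum hot utility of 7.5 kW, a minimum cold utility of 10 kW and a shifted pinch temperature of 145 °C (150 °C on the hot scale, 140 °C on the cold scale), the same numbers a hand-drawn cascade would yield.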
Globally, the application of Pinch analysis in the process industry is necessary for large, complex industrial facilities, where systematic methods are needed to identify the best opportunities to improve energy efficiency. The typical PA project is based on the fundamental stage of data acquisition (primarily heat loads and temperatures and economic parameters) regarding the process under consideration. Then, the analysis can be directed to:
This last aspect can be characterized as Mass Pinch analysis, developed by Mahmoud M. El-Halwagi and Vasilios Manousiouthakis: a thermodynamic procedure used to identify the bottlenecks that limit the extent of mass exchange between the rich and the lean process streams (in order to improve the design and minimize the cost).
Since the Oil & Gas sector is one of the major energy users and suppliers, and is highly integrated in terms of heating and cooling duties, it is an ideal candidate for PA.
The group at Politecnico di Milano has developed many PA-based strategies for the optimal design of steam generators, boilers and heat recovery steam cycles. Their "HRSC Optimizer" has been applied with interesting results to Fischer-Tropsch (FT) synthesis processes (with high recovery of the unconverted gases) as well as to integrated gasification combined cycles (IGCC-CCS). Joe and Rabiu improved the existing HEN of a petroleum refining section, revealing 34% energy savings through the definition of the optimal utility usage and the number and surfaces of the exchangers. Yoon et al. suggested the retrofit of an ethylbenzene plant by PA, with a payback time of less than one year and an OPEX reduction of more than 5%. The application of PA in the retrofit design of the Tula distillation units was described by Briones in 1999 in the Oil & Gas Journal. A reduction of fuel consumption by more than 40% (8 M$/year) with a payback period of less than 2 years is among the main claimed results. An integrated design of the atmospheric and vacuum distillation units exploited opportunities for heat recovery and removed inefficiencies such as the use of stripping steam instead of reboilers, the use of heat sources (for example, vacuum residue and pump-arounds), and cogeneration in the steam and power plant.
A. Posada and V. Manousiouthakis studied a methane-reforming-based hydrogen production plant with the purpose of finding the minimum utility cost (hot, cold and electricity). Keshavarzian et al. described the PA of the para-xylene separation unit of Borzouyeh Petrochemical Company. Rossiter reported a detailed example of PA in a crude distillation unit. After data acquisition and identification of the energy target and the major inefficiencies, he identified the main opportunities for retrofit design: i) rearranging existing heat exchangers to increase feed preheating and/or steam generation; ii) adding heat transfer area to existing matches between hot and cold streams; iii) adding new exchangers to introduce new matches between the streams. His retrofit design achieved recovery of 45% of the energy target (14 MBtu/h in crude preheating and 12.2 MBtu/h for steam generation at 120 psig), with a net saving of more than 2.5 M$ and a payback period of about 3 years. Shahani et al. have suggested alternative designs of hydrogen plants seen as a source of steam from waste heat recovery (apart from the primary purpose of producing hydrogen), because steam reforming can potentially produce steam more efficiently than a conventional boiler. Further industrial case studies are reported on the IPIECA website.
For very large problems such as those of the refining industry, mass and energy integration is necessary to reach the best economic option. By analogy with the heat exchanger network, any synthesis process can be seen as an interconnection of different mass exchangers.
This broader vision derives from the concept of seeing a process as a converter of energy (degradation) and matter (separation). This systemic approach is typical of chemical and process engineering, which see any complex system as an integration of unit processes. This representation was intuitively depicted by T. Gundersen in 2013 at the International Process Integration Jubilee Conference.
Examples of water and hydrogen PA in the oil & gas sector can be found for energy recovery at a Fluid Catalytic Cracking (FCC) unit. Rajesh et al. presented an integrated approach to obtain possible sets of steady-state operating conditions for improved performance of an existing plant, using an adaptation of a genetic algorithm that seeks simultaneous maximization of product hydrogen and export steam flow rates. Hydrogen PA in a petroleum refinery has been presented by M.K. Oduola and T.B. Oguntola, who found that the hydrogen margin between source and sink units was drastically reduced to about 17 kNm3/h (a reduction of ~63%). Nelson and Liu created an automated pinch spreadsheet for the quick evaluation of hydrogen excess and the possible savings in the networks, through the evaluation of sources and sinks by Property Cascade Analysis (PCA) to establish the resource targets within a property integration framework. The fundamentals and the mathematical algorithms for wastewater minimization by PA can be found in the work of Wang and Smith.
Nevertheless, it is important to note that, if not properly bounded and conducted by expert evaluators, pinch analysis can lead to risky solutions or to merely theoretical solutions that are not compatible with the system in which they are applied. The design must therefore be examined in depth by external expert auditors (in particular through hazard analysis).
The large increase in industrial development, population growth and urbanization over the past century has favoured the release of hazardous chemicals into the environment and a general global pollution. Several chemicals, including heavy metals and radionuclides, but also organic compounds such as pesticides, dyes and Polycyclic Aromatic Hydrocarbons (PAHs), may persistently accumulate in soils and sediments, potentially threatening human health and environmental quality due to their carcinogenic and mutagenic effects and their ability to bioconcentrate throughout the trophic chain.
Concern about the toxicity risk and environmental pollution associated with chemical contaminants has called for the development and application of remediation techniques. In fact, a large effort has been devoted to finding ways to remove contaminants from ecosystems. In particular, several strategies have been devised to remediate and restore polluted soils, based on physical, chemical and biological methods. These techniques may be applied in situ, i.e. directly in the contaminated soil, offering numerous advantages over ex situ technologies, whereby the soil is removed to be treated elsewhere. In situ remediation techniques avoid soil transportation costs and can be applied to dilute and widely diffused contamination, thus minimizing intensive and potentially damaging environmental manipulation. Conversely, ex situ processes imply the excavation of polluted soil and its decontamination in a separate processing plant. Table 1 summarizes the main technologies for cleaning up polluted soils and the estimated costs for each treatment.
Depending on contaminants characteristics and soil properties, different soil remediation technologies can be applied with variable success. However, effective eco-friendly biological, physical and chemical remediation practices are being today preferred over the techniques which imply larger biotic and abiotic environmental impacts.
|Treatment||Approximate remediation cost (£/tonne)|
|Removal to landfill||Up to 100|
|Cement and Pozzolan based||25-175|
|In situ flushing||25-80|
|In situ bioremediation||175|
Bioremediation, either as a spontaneous or as a managed strategy, involves the application of biological agents to clean-up environmental compartments polluted by hazardous chemicals. Plants, microorganisms and plant-microorganism associations, either naturally occurring or tailor-made for the specific purpose, represent the main bioremediation active factors.
In contaminated soils, aromatic Anthropogenic Organic Pollutants (AOPs) can be degraded by bacteria or fungi via an aerobic or anaerobic metabolism, or both. In aerobic metabolism, molecular oxygen is incorporated into the aromatic ring prior to dehydrogenation and subsequent aromatic ring cleavage. In anaerobic metabolic processes molecular oxygen is absent, and alternative electron acceptors, such as nitrate, ferric iron, and sulfate, are necessary to oxidize aromatic compounds.
The effective agents in the transformation of organic pollutants are the microbial enzymatic systems which, as powerful catalysts, extensively modify the structure and toxicological properties of contaminants or completely mineralize the organic molecules into innocuous inorganic end products. However, in order to be biodegraded, contaminants must interact with the enzymatic system within the biodegrading organisms. If soluble, they can easily enter cells but, if insoluble, they must be transformed into soluble or more easily cell-available products.
The main sources of these enzymes are fungi, such as wood-degrading basidiomycetes, terricolous basidiomycetes, ectomycorrhizal fungi, soil-borne microfungi, and actinomycetes. Most fungi are robust organisms and may tolerate larger concentrations of pollutants than bacteria. In particular, white-rot fungi appear to be unique and attractive organisms for the bioremediation of polluted sites. A possible alternative to the bioremediation of polluted sites by microbial activity may be the direct application of cell-free enzymes after their isolation from microbial cultures.
Bioremediation of contaminants can be accomplished more rapidly by two methods: bioaugmentation and/or biostimulation. The process of bioaugmentation, as it applies to remediation of petroleum hydrocarbon contaminated soils, involves the introduction into a contaminated system of microorganisms that have been exogenously cultured with the aim of degrading specific chains of hydrocarbons. These microbial cultures may be derived from the very same contaminated soil or obtained from a stock of microbes that have previously been proven to degrade hydrocarbons. On the other hand, the biostimulation process implies the addition to polluted soils of nutrients in the form of organic and/or inorganic fertilizers, in order to stimulate the activity and proliferation of indigenous microbes. These may or may not have been proven to target the polluting hydrocarbons as a primary food source. However, the hydrocarbons are assumed to be degraded more rapidly in comparison to natural attenuation processes, probably because of the increased number of microorganisms induced by the greater amount of nutrients provided to the contaminated soil.
Phytoremediation of organic and inorganic contaminants involves either a physical removal of pollutants or their bioconversion (biodegradation or biotransformation) into biologically inactive forms. The conversion of metals into inactive forms can be enhanced by external conditioning of soils: enhancement of soil pH (e.g. through liming), addition of organic matter (e.g. sewage sludge, compost etc.), inorganic anions (e.g. phosphates) and metal oxides and hydroxides (e.g. iron oxides). Concomitantly, plants can play a role here in transforming contaminants into inactive forms by releasing different anionic species into soil and altering soil redox conditions.
The uptake of AOPs by plants occurs through two pathways. One pathway is the soil-water-plant cycle, in which pollutants are taken up from the soil solution and then transported up the plant shoots within the xylem transpiration system. A second pathway involves the soil-air-plant cycle, in which AOPs are taken up by the aerial parts of plants, either from soil particles adsorbed on plant leaves or directly as gaseous forms of AOPs after their volatilization from soil. Following plant uptake, AOPs are further translocated, sequestered, and degraded in plant tissues by other processes. The key parameters influencing the translocation of contaminants from soil to plant include the content of contaminants in soil (or water), their physical-chemical properties, the plant species, the soil type, and the exposure time of the plant.
The advantages of phytoremediation over other approaches stem from the inherent preservation of the soil's natural structure and from the free sunlight energy driving the process, which enhances the content of degrading microbial biomass in the soil.
The composting process is the biological decomposition of organic wastes under controlled aerobic conditions. In contrast to the uncontrolled natural decomposition of organic compounds, the temperature in composting waste heaps can increase by self-heating to the ranges typical of mesophilic (25-40 °C) and thermophilic (50-70 °C) microorganisms. The end product of composting is a biologically stable, humus-like product that can be employed in several applications, e.g. as soil conditioner, fertilizer, biofiltering material, or fuel. The composting process can concomitantly reach different objectives, such as the volume and mass reduction of biomasses, their stabilization and drying, and the elimination of phytotoxic substances and pathogens.
Composting is also a method that can be employed in the decontamination of polluted soils, because compost is capable of sustaining various microbial populations that are potential hydrocarbon degraders, such as bacteria (including bacilli, pseudomonads, and mesophilic and thermophilic actinomycetes) and lignin-degrading fungi. Compost can also improve the chemical and physical properties of the soil to be decontaminated, since it affects soil pH, nutrient and moisture content, soil structure, and the microbial biomass population.
Unless coupled with more bioactive compost materials, the possible use of biochar in the remediation of contaminated soil appears limited by its inherent biological recalcitrance, which depresses the activity of the microbial degraders of pollutants.
Inadequate mineral nutrients, especially nitrogen and phosphorus, often limit the growth of hydrocarbon-utilizing bacteria in water and soil. The addition of nitrogen and phosphorus to oil-polluted soil has been shown to accelerate the biodegradation of petroleum in soil. One study reported 18.7% and 31.2% higher crude-oil biodegradation after 10 weeks in soil amended with chicken droppings and fertilizer, respectively, compared to an un-amended control soil, while in another the degradation of crude oil in soil amended with melon shells as a nutrient source was 30% higher than in un-amended polluted soil after 28 days.
Addition of a carbon source as a nutrient in contaminated soil is known to enhance the rate of pollutant degradation by stimulating the growth of microorganisms responsible for biodegradation of the pollutant.
It has been suggested that the addition of carbon in the form of pyruvate stimulates microbial growth and enhances the rate of degradation of Polycyclic Aromatic Hydrocarbons (PAHs). Mushroom compost and spent mushroom compost (SMC) are also applied in treating organo-pollutant contaminated sites. Addition of SMC results in enhanced PAH-degrading efficiency (82%) as compared to removal by sorption on immobilized SMC (46%). Addition of SMC to the contaminated medium has been observed to reduce toxicity while supplying enzymes, microorganisms, and nutrients for the microorganisms involved in PAH degradation.
Therefore, the utilization of organic waste in the bioremediation of soil appears highly promising. It would reduce the amount of organic waste sent to landfill, thereby reducing landfill gas emissions, while providing a cheap source of organic additives for remediation purposes.
Figure 3 shows the biodegradation of used lubricating oil in soil over a period of 98 days, as reported in Agamuthu et al. (2013). Soils amended with organic wastes showed markedly higher biodegradation of the used lubricating oil than the control treatment: at the end of the 98 days, the contaminated soil amended with cow dung showed the highest percentage of oil biodegradation (94%), followed by the soil amended with sewage sludge (82%), compared with 66% in the un-amended control soil.
The main difference in oil biodegradation between the soils amended with organic wastes and the un-amended treatment emerged during days 14-28, when biostimulation produced a significant increase in oil biodegradation. The added nutrients stimulate the degradative capabilities of the indigenous microorganisms, allowing them to break down the organic pollutants at a faster rate.
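Assuming simple first-order decay kinetics (an assumption for illustration, not an analysis from the study), the 98-day biodegradation percentages above can be converted into rate constants, which makes the effect of the amendments easier to compare:

```python
import math

def first_order_k(fraction_degraded: float, days: float) -> float:
    """Rate constant k (1/day) assuming C(t) = C0 * exp(-k * t)."""
    return -math.log(1.0 - fraction_degraded) / days

# 98-day biodegradation fractions reported by Agamuthu et al. (2013)
treatments = {"cow dung": 0.94, "sewage sludge": 0.82, "control": 0.66}
rates = {name: first_order_k(f, 98.0) for name, f in treatments.items()}

for name, k in sorted(rates.items(), key=lambda kv: -kv[1]):
    half_life = math.log(2) / k
    print(f"{name:13s} k = {k:.4f} 1/day  (half-life ~ {half_life:.0f} days)")
```

On these assumptions the cow-dung amendment roughly triples the apparent rate constant relative to the un-amended control, which is consistent with the biostimulation effect described above.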
In conclusion, bioremediation can be a viable and effective response to soil contamination with petroleum hydrocarbons and can be positively enhanced by the use of organic wastes.
Life Cycle Assessment (LCA) makes it possible to evaluate the interactions that a product or service has with the environment over its whole life cycle, which includes pre-production (extraction and production of raw materials), production, distribution, use (including reuse and maintenance), recycling, and final disposal. The objectives of LCA are thus to evaluate the effects of the interactions between a product and the environment, and therefore the environmental impacts directly or indirectly caused by the use of a given product.
LCA can be conducted by assessing the environmental footprint of a product from raw materials to production ("cradle to gate"), or extended over the whole product life cycle, including disposal ("cradle to grave"). If the analysis is performed directly on the categories of environmental impact, the methodology is called the "mid-point approach". A viable and valid alternative is represented by the "end-point approach" or "damage-oriented approach".
In the first phase, the goal and scope of the study are formulated and specified in relation to the intended application. The object of study is described in terms of a so-called functional unit. Apart from describing the functional unit, the goal and scope should address the overall approach used to establish the system boundaries. The system boundary determines which unit processes are included in the LCA and must reflect the goal of the study.
The second phase ‘‘Inventory’’ involves data collection and modeling of the product system as well as description and verification of data. This phase encompasses all data related to environmental (e.g., CO2) and technical (e.g., intermediate chemicals) quantities for all relevant unit processes within the study boundaries that compose the product system. The data must be related to the functional unit defined in the goal and scope phase. The results of the inventory are a life cycle inventory (LCI), which provides information about all inputs and outputs in the form of elementary fluxes between the environment and all the unit processes involved in the study.
The third phase, "Life Cycle Impact Assessment (LCIA)", aims to evaluate the contribution to impact categories such as global warming and acidification. The first step is termed characterization: here, impact potentials are calculated based on the LCI results. The next steps are normalization and weighting, but both are optional according to the ISO standard. Normalization provides a basis for comparing different types of environmental impact categories (all impacts are expressed in the same unit). Weighting assigns a weighting factor to each impact category according to its relative importance.
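The characterization, normalization, and weighting steps can be sketched numerically; all characterization factors, normalization references, and weights below are illustrative placeholders rather than values from any official LCIA method:

```python
# Hypothetical LCI results (kg emitted per functional unit)
lci = {"CO2": 120.0, "CH4": 0.8, "SO2": 0.15}

# Illustrative characterization factors (placeholders, not an official method)
char_factors = {
    "global warming (kg CO2-eq)": {"CO2": 1.0, "CH4": 28.0},
    "acidification (kg SO2-eq)": {"SO2": 1.0},
}

# Characterization: impact potential = sum over flows of (amount * factor)
impacts = {cat: sum(lci.get(flow, 0.0) * cf for flow, cf in factors.items())
           for cat, factors in char_factors.items()}

# Normalization (per-capita annual reference, illustrative) makes categories
# dimensionless and comparable; weighting (optional per ISO) combines them.
references = {"global warming (kg CO2-eq)": 8000.0, "acidification (kg SO2-eq)": 50.0}
weights = {"global warming (kg CO2-eq)": 0.6, "acidification (kg SO2-eq)": 0.4}

normalized = {cat: impacts[cat] / references[cat] for cat in impacts}
single_score = sum(normalized[cat] * weights[cat] for cat in normalized)
print(impacts, normalized, round(single_score, 5))
```

The single score is only meaningful relative to an alternative product system assessed with the same factors, references, and weights.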
Issues such as choice, modelling and evaluation of impact categories can introduce subjectivity into the LCIA phase. Therefore, transparency is critical to the impact assessment to ensure that assumptions are clearly described and reported.
The LCIA addresses only the environmental issues that are specified in the goal and scope. Therefore, LCIA is not a complete assessment of all environmental issues of the product system under study. LCIA cannot always demonstrate significant differences between impact categories and the related indicator results of alternative product systems. This may be due to
The last phase, named ‘‘interpretation,’’ is an analysis of the major contributions, sensitivity analysis, and uncertainty analysis. This stage leads to the conclusion whether the ambitions from the goal and scope can be met.
The interpretation should reflect the fact that the LCIA results are based on a relative approach, that they indicate potential environmental effects, and that they do not predict actual impacts on category endpoints, the exceeding of thresholds or safety margins or risks. The findings of this interpretation may take the form of conclusions and recommendations to decision-makers, consistent with the goal and scope of the study.
Life cycle interpretation is also intended to provide a readily understandable, complete and consistent presentation of the results of an LCA, in accordance with the goal and scope definition of the study.
The interpretation phase may involve the iterative process of reviewing and revising the scope of the LCA, as well as the nature and quality of the data collected in a way which is consistent with the defined goal.
The findings of the life cycle interpretation should reflect the results of the evaluation element.
The LCA analysis can be performed using software packages (the most important and widely used being SimaPro, Boustead, and GaBi) which implement several LCA methodologies. Among these, the most used methods at mid-point level are:
As for the methods at end-point (or damage) level, one of the most interesting is the Eco-indicator 99. This approach deals with 11 mid-point impact categories (Carcinogenesis, Respiratory Organics, Respiratory Inorganics, Climate Change, Radiation, Ozone Layer, Ecotoxicity, Acidification/Eutrophication, Land Use, Minerals, Fossil Fuels), further aggregated into representative macro-categories of overall damage: Human Health, Ecosystem Quality, and Resources. The impact categories from carcinogens to ozone layer are normalized and grouped in the macro-category (end-point or damage level) ‘‘Human Health’’, which takes into account the overall impact (damage) on human health of the emissions associated with the product analyzed. The categories ecotoxicity, acidification/eutrophication, and land use are included in the macro-category ‘‘Ecosystem Quality’’, which accounts for the overall damage to the environment, while minerals and fossil fuels are grouped in the macro-category ‘‘Resources’’, which accounts for the depletion of non-renewable resources. The impact-category indicator results calculated in the characterization step are added directly to form damage categories. Addition without weighting is justified because all impact categories that refer to the same damage type (like damage to Ecosystem Quality) have the same unit (for instance, PDF*m2*yr; PDF, potentially disappeared fraction of plant species). This procedure can also be interpreted as grouping. The damage categories (not the impact categories) are then normalized at a European level (damage caused by one European per year), mostly with 1993 as the base year, with some updates for the most important emissions.
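The damage-level aggregation just described can be sketched numerically; the indicator values and the normalization reference below are illustrative placeholders, not actual Eco-indicator 99 data:

```python
# Illustrative Eco-indicator 99-style aggregation (placeholder values).
# Ecosystem Quality indicators all share the unit PDF*m2*yr, so they can be
# added directly, without weighting, to form the damage category.
ecosystem_quality_indicators = {
    "ecotoxicity": 1.8,                   # PDF*m2*yr
    "acidification/eutrophication": 0.9,  # PDF*m2*yr
    "land use": 2.4,                      # PDF*m2*yr
}
damage = sum(ecosystem_quality_indicators.values())

# Normalization at damage level: divide by the annual damage attributed to
# one average European (reference value below is a placeholder)
per_european_year = 5130.0  # PDF*m2*yr, illustrative
normalized_damage = damage / per_european_year

print(f"Ecosystem Quality damage: {damage:.1f} PDF*m2*yr "
      f"({normalized_damage:.2e} European-equivalents)")
```

Note that the addition happens before normalization: only indicators with an identical damage unit may be summed, which is exactly why the grouping into Human Health, Ecosystem Quality, and Resources is defined first.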
Due to its complex and polluting composition, norms regarding the discharge of produced water into the environment have gradually become stricter and more limiting. The costs of appropriate produced-water treatments amount to about 40 billion dollars per year and weigh significantly on the price of final products. For this reason, it is necessary that the water can be reused after being treated; this is especially true in arid places where water is a valuable and precious asset. The aim of this case study is to highlight the importance of treating produced water and to understand its environmental significance. The assessment includes the entire life cycle of the process: the extraction and processing of raw materials, manufacturing, transportation, distribution, use, reuse, recycling, and disposal.
The LCA method is applied to the most important produced-water treatments, using GaBi 6 as process simulator. The analysis and the comparison have been made for two cases:
Primary treatments consist mainly of physical treatments aimed at the removal of suspended oil, while secondary treatments focus on the removal of dissolved organic compounds (mainly BTEX). The application of tertiary treatments (membranes) is necessary to make the produced water suitable not only for disposal but also for use in civil and industrial applications. In this way it can represent a resource with economic value, rather than an oil-extraction waste.
Figure 6 compares the LCA results for the two systems under analysis in terms of three important mid-point impact categories, accounting for global warming, ecotoxicity, and human health. As can be seen from the figure, the presence of secondary and tertiary treatments strongly reduces the impact on ecotoxicity and human health, while the global-warming impact is higher than that of system 1 (primary treatments only), mainly due to the GHG emissions produced during the secondary and tertiary treatment processes.
Liquefied Natural Gas (LNG) is used for transporting natural gas (NG) to distant markets, not supplied by NG grid connecting the extraction/production point to the users.
Basically, the LNG process is composed of the following steps:
However, the high production, transportation, and storage costs have restricted the spread of LNG technology to specific cases in which there is no cheaper way to transport the NG.
Nevertheless, market and political issues related to NG are increasing interest in this alternative transportation technology, which has the benefit of enlarging the potential markets for sellers and the potential suppliers for buyers (refer to Figure 1). The growing interest has led to ever greater investments in LNG Research & Development and its applications.
In the following, some of the technologies and innovations related to the LNG production, the transportation and the regasification fields are reported and assessed.
The pretreatment unit, where the undesired substances are removed, is the same used in the conventional production/distribution process and is composed of separation units and a slug catcher able to separate the gas from the oil and water phases.
Then, the NG is purified of acid gases such as hydrogen sulfide (H2S) and carbon dioxide (CO2) by means of absorption/adsorption processes. In this step too, conventional technologies are used.
In step 3, an adsorbent is used to remove water from the purified natural gas, so that ice will not form during the subsequent step.
Then the NG is ready to be liquefied in the core unit of the process, the liquefaction unit, in which it is cooled down and liquefied at −160 °C or less. Because of the extremely low operating temperatures needed, the liquefaction process requires an enormous amount of energy, usually supplied by burning a share of the NG feedstock. R&D efforts are focused mainly on this step, proposing innovations to reduce energy consumption and improve the efficiency of the liquefaction process. The main liquefaction processes and innovations are:
Since all these configurations require large amounts of energy (mainly for the refrigeration compressors), growing R&D efforts are devoted to the process optimization. The main R&D activities are focused on the cryogenic heat exchanger design and optimization (Air Product and Chemicals Inc. technology), on the improvement of refrigerant compressors (SplitMR technology) and on the efficiency of the compressors’ drivers.
Basically, two vessel technologies are applied:
R&D in the sector is mainly focused on improving FSRU performance and reducing costs, since the FSRU is an attractive fast-track solution for small markets and emerging economies.
The regasification facilities boil the LNG and send it into the NG grid. Almost 100 LNG regasification terminals are now operating worldwide and many others are under construction, mainly in Europe and Asia. The most widely applied regasification technologies are:
The effects on human health and the impact on the environment due to exposure to heavy metals such as lead, cadmium, mercury, and arsenic have been extensively studied by international bodies such as the WHO, clearly attesting a significant negative impact even at low metal concentrations. Although the adverse health effects have been known for a long time, exposure to heavy metals continues to increase due to their extensive use in industry (refer to Figure 1).
Specifically, soil contamination by heavy metals is particularly dangerous for humans and for ecosystems, since most metals do not undergo microbial or chemical degradation: their concentration in soils persists for a long time and accumulates. The main associated risks are listed as follows:
Soil contamination is a growing issue, driven by the expansion of industrial areas, disposal of high-metal wastes, leaded gasoline and paints, land application of fertilizers, sewage sludge, pesticides, wastewater irrigation, coal combustion residues, and spillage of petrochemicals.
Some technologies have been developed worldwide for the remediation of contaminated soil. The most widely applied are:
However, the most interesting technology in terms of cost, efficiency, and ease of management is Electro-Kinetic Remediation (EKRT): an electric field is generated by two electrodes inserted into the ground and encapsulated in extraction wells, and the electrically charged metal ions are transported, collected, and removed from the soil (a conceptual scheme is reported in Figure 2).
Electro-kinetic remediation technology has been known and applied for 20 years, but ENI, in partnership with the University of Ferrara, has developed an optimized EKRT configuration for heavy-metal recovery from contaminated soil, described in the following paragraph.
ENI has developed an optimized EKRT able to reduce the technology's costs and improve its ease of application, mainly for large-scale use. ENI's EKRT can be applied to remove a wide variety of metals from contaminated soil, such as Zn, Pb, As, Cd, Co, Fe, Cr, Mn, Cu, and Sn.
The main innovations introduced concern:
ENI has performed EKRT experimental tests on site using real soils. Both the single metal (Hg) and a more complex (many metals) decontamination applications have been assessed, with very promising results in terms of recovery efficiency and operative easiness.
In the following, some images taken from the ENI website show the electrode installation and the experimental phases.
DME (Dimethyl Ether) is an organic compound mainly used as aerosol propellant and as a reagent for the production of widely applied compounds as the dimethyl sulfate (a methylating agent) and the acetic acid.
Recently, companies such as Topsoe, Mitsubishi Co., and Total have focused their efforts on promoting DME as a new and sustainable synthetic fuel that can substitute liquefied petroleum gas (LPG) or be blended into fuel mixtures, thanks to its excellent combustion properties (cetane number = 55-60). DME has the potential to be fed into diesel engines with only slight modifications, and its combustion prevents soot formation.
The conversion of DME to hydrocarbons is also a relevant emerging market. The processes usually known by the general terms “Methanol-to-Hydrocarbons” (MTH), “Methanol-to-Olefins” (MTO), “Methanol-to-Propylene” (MTP), “Methanol-to-Gasoline” (MTG), and “Methanol-to-Aromatics” (MTA) are more effective if the starting reagent is DME instead of methanol.
For all these reasons, the DME market is projected to reach 9.7 billion USD by 2020, with an annual growth of 19.65% between 2015 and 2020.
DME is usually produced directly from syngas (CO/H2 mixtures, possibly with a small amount of CO2, typically below 3%) or by dehydration of methanol, which is in turn produced from syngas. Syngas can be generated from fossil fuels (coal, methane) or from renewable sources such as biomass or renewable electricity. Moreover, there is growing interest in direct DME production from CO2-rich mixtures.
In the following, an overview of the DME production processes applied worldwide is reported; the major production plants currently in operation are then described.
In industrial applications, the DME is produced from the syngas by means of two different configurations:
In the one-step process (direct production process), DME is produced directly from syngas in a single reactor, where a bifunctional catalyst supports both methanol formation and methanol dehydration according to the following reaction scheme:

Methanol formation: CO + 2H2 ↔ CH3OH, ΔH° = −90.4 kJ/mol
Water-gas shift: CO + H2O ↔ CO2 + H2, ΔH° = −41.0 kJ/mol
Methanol dehydration: 2CH3OH ↔ CH3OCH3 + H2O, ΔH° = −23.0 kJ/mol
Overall reaction: 3CO + 3H2 ↔ CH3OCH3 + CO2, ΔH° = −244.8 kJ/mol
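As a quick consistency check, the overall enthalpy follows from Hess's law applied to the step enthalpies quoted above (two methanol formations, one dehydration, one water-gas shift):

```python
# Hess's law check for one-step DME synthesis (step enthalpies in kJ/mol)
dH_methanol = -90.4   # CO + 2H2 -> CH3OH
dH_wgs      = -41.0   # CO + H2O -> CO2 + H2
dH_dehydr   = -23.0   # 2CH3OH -> CH3OCH3 + H2O

# Overall 3CO + 3H2 -> CH3OCH3 + CO2:
# two methanol formations + one dehydration + one shift (which consumes the
# water released by dehydration)
dH_overall = 2 * dH_methanol + dH_dehydr + dH_wgs
print(f"Overall reaction enthalpy: {dH_overall:.1f} kJ/mol")
```

The strong overall exothermicity is why heat removal is a key design issue for the single-reactor configuration.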
The syngas is produced by natural gas steam reforming or by gasification of coal/petroleum residues; after the DME synthesis reactor, a purification unit is needed, able to separate the DME from water and methanol in a double distillation stage. The following figure shows a diagram of the one-step process.
In the two-step (indirect) process, methanol formation from syngas and DME production from methanol take place in two separate reactors, packed with the specific catalysts (copper-based for the first, silica-alumina for the second). The figure illustrates the block diagram of this architecture.
The reactants of the DME synthesis process can be produced from renewable energy sources such as biomass, solar, and wind. In this way, DME acts as a sort of liquid energy vector, able to store renewable energy in an easily dispensable, easily applicable, high-energy-density fuel.
Starting from biomasses such as energy crops, agro-residues, and forest residues, a gasification process can be applied to generate a syngas stream to be fed to the one-step or two-step DME synthesis process. On the other hand, if the starting biomass is organic waste, manure, or sewage, an anaerobic digestion + pyrolysis system can be applied to generate the CO and H2 stream.
The hydrogen stream in the syngas mixture can be generated by an electrolyzer supplied with electricity produced by renewable power plants such as photovoltaic and wind farms, and then mixed with CO/CO2. In this way, the renewable energy is “stored” in the DME, which, being a liquid fuel, can be easily distributed, stored, and used, unlike hydrogen itself, which still has a series of unsolved distribution and storage issues. The following scheme shows a conceptual layout of DME production from solar/biomass energy.
Instead of syngas, a CO2-rich feedstock can be supplied to the DME production process, thus converting the CO2 into a high-added-value product. In this process, the CO2, which is the main greenhouse gas (GHG), is not emitted but converted into a fuel, which can then be burned releasing the carbon dioxide again.
Such a configuration is less developed than the conventional syngas-fed process, but many research efforts are devoted to improving its performance, since it would allow both the production of DME and the reduction of GHG emissions, thus reducing the carbon footprint of DME synthesis.
CO2 presence in the reactor environment leads to two main issues:
The research is focused mainly on the development of new catalysts, tailored to the conversion of CO2-rich mixtures, and of selective membranes able to remove water from the reaction environment, promoting the methanol dehydration reaction and DME production.
The one-step and two-step DME production processes are relatively well established, with a number of companies proposing the one-step (Topsoe, JFE Ho., Korea Gas Co., Air Products, NKK) or two-step (Toyo, MGC, Lurgi, Uhde) architecture.
Among the many applications for DME industrial production, the most interesting are listed below:
Enriched Methane (EM) is a blend of hydrogen and methane which can be fed, if the H2 content is lower than 30 vol%, into conventional natural-gas internal combustion engines, with a series of benefits in terms of:
EM can be distributed in the low- and medium-pressure natural gas grid (if the hydrogen content is lower than 20 vol%) and stored using conventional methane storage systems, which makes its application competitive, relying on available, low-cost infrastructure. Moreover, since hydrogen has the highest lower heating value per unit mass (kJ/kg), the blend's heating value on a mass basis is greater than that of methane itself, thus enriching the energy content.
Basically, if the H2 is produced by exploiting a renewable energy source (solar, wind, biomass), EM becomes a sort of hybrid energy vector (fossil + renewable) with immediate, competitive applicability and a reduced environmental impact, thanks to the strong reduction of CO2 emissions (up to 11 wt% if a blend with 30 vol% H2 is burned).
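The order of magnitude of the quoted CO2 reduction can be reproduced from standard molar lower heating values (LHV of H2 ≈ 241.8 kJ/mol, CH4 ≈ 802.3 kJ/mol; textbook values assumed here), comparing CO2 emitted per unit of energy released:

```python
# CO2 per unit energy for a 30 vol% H2 / 70 vol% CH4 blend vs pure methane.
# Molar lower heating values (textbook values, kJ/mol)
LHV_H2, LHV_CH4 = 241.8, 802.3
x_h2 = 0.30  # mole (= volume, for ideal gases) fraction of H2 in the blend

energy_blend = x_h2 * LHV_H2 + (1 - x_h2) * LHV_CH4  # kJ per mole of blend
co2_blend = 1 - x_h2                                 # mol CO2 per mole of blend

co2_per_MJ_blend = co2_blend / energy_blend * 1000   # mol CO2 per MJ
co2_per_MJ_ch4 = 1.0 / LHV_CH4 * 1000                # pure methane reference

reduction = 1 - co2_per_MJ_blend / co2_per_MJ_ch4
print(f"CO2 reduction per unit energy released: {reduction:.1%}")
```

On this energy basis the reduction comes out close to the figure quoted above, because hydrogen contributes heat without contributing any carbon.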
In the present article, the main routes to produce EM blends are investigated both from fossil fuel and from renewable energies. Then, some applications implemented worldwide are presented.
Natural gas steam reforming is the most widely used process for the massive production of hydrogen. The process is composed of the following reactions:
and it is strongly endothermic, thus requiring high temperatures to achieve high methane conversion (90% at 850-950 °C). In the conventional process, the reactions occur in tubular catalytic reactors placed inside a furnace, where a share of the natural gas (approx. 30%) is burned to supply the heat duty of the reactions. However, if an EM stream is to be produced, a much lower methane conversion (<20%) and, consequently, lower operating temperatures (450-500 °C) are required to meet the hydrogen-content specification. The main consequence is that this lower thermal level can be reached by concentrating solar radiation with well-known technologies such as the Concentrating Solar Power (CSP) system developed by ENEA, able to heat a molten-salt stream up to 550 °C, a thermal level suitable for the process requirements. In this way, the hydrogen is produced by exploiting a renewable source, improving the environmental footprint. The following figure shows a conceptual block scheme of the technology: after the low-temperature reforming, a water-gas shift reactor converts the CO into H2 and CO2; the unreacted steam is then removed by condensation and the CO2 by amine-based absorption, while the EM stream is sent to the application.
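The claim that a modest methane conversion suffices can be checked with a simplified mole balance, assuming complete water-gas shift and full steam/CO2 removal (simplifying assumptions, not the actual plant design):

```python
# With overall CH4 + 2H2O -> CO2 + 4H2 (reforming + complete shift), steam
# condensed out and CO2 scrubbed, each mole of CH4 fed leaves (1 - x) mol CH4
# and 4x mol H2 in the product, where x is the methane conversion.
def h2_fraction(x: float) -> float:
    """Dry, CO2-free H2 mole fraction as a function of CH4 conversion x."""
    return 4 * x / (1 + 3 * x)

# Conversion needed to hit a 30 vol% H2 blend (simple bisection)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if h2_fraction(mid) < 0.30:
        lo = mid
    else:
        hi = mid

print(f"CH4 conversion needed for 30 vol% H2: {lo:.1%}")
```

Under these assumptions a conversion of roughly 10% already yields a 30 vol% H2 blend, consistent with the statement that conversions well below those of conventional reforming are sufficient.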
A variant of the process is Partial Oxidation Methane Reforming, where the heat duty is supplied by the combustion of a share of the input methane directly inside the adiabatic reactor. In this case, the energy needed to produce the hydrogen comes from a fossil source.
Another process is coal gasification, able to produce syngas (a mixture of methane, carbon monoxide, hydrogen, carbon dioxide, and water vapor) from coal and water, air, and/or oxygen. After the gasification reactor, a proper purification system makes it possible to obtain an EM stream with the desired H2 content.
Hydrogen can be produced from electricity by means of electrolyzers, which dissociate the water molecule into hydrogen and oxygen. The electricity can be produced by renewable power plants such as solar photovoltaic, wind farms, and hydroelectric plants, so that the hydrogen produced is completely CO2-free. The hydrogen is then mixed with a methane stream to obtain the EM blend, which can be distributed through the natural gas grid. The following figure shows the renewable EM plant configuration.
This architecture makes it possible to convert surplus renewable electricity into a high-added-value product such as EM, mitigating the intermittent nature of renewable energy and avoiding overloading of the electricity network.
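A rough sizing sketch of such a power-to-EM scheme, under assumed typical figures (electrolyzer consumption of ~55 kWh per kg of H2 and an ideal-gas molar volume of 22.414 Nm3/kmol; both are illustrative assumptions, not plant data):

```python
# How much electrolyzer power is needed to enrich a methane stream to 20 vol% H2?
spec_kwh_per_kg_h2 = 55.0   # assumed electrolyzer specific consumption
M_H2 = 2.016                # molar mass of H2, kg/kmol
V_MOLAR = 22.414            # ideal-gas molar volume, Nm3/kmol

ch4_flow = 100.0                       # Nm3/h of methane to be enriched
h2_flow = ch4_flow * 0.20 / 0.80       # Nm3/h of H2 for a 20 vol% blend
h2_kg_per_h = h2_flow / V_MOLAR * M_H2 # mass flow of hydrogen, kg/h
power_kw = h2_kg_per_h * spec_kwh_per_kg_h2

print(f"H2 needed: {h2_flow:.1f} Nm3/h -> electrolyzer power ~ {power_kw:.0f} kW")
```

So, on these assumptions, enriching a modest 100 Nm3/h methane stream already absorbs on the order of a hundred kilowatts of surplus electricity, which illustrates why the scheme is attractive as a sink for excess renewable generation.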
The biological production of hydrogen by photosynthetic bacteria, algae, or fermentative microorganisms appears to be a promising alternative route to EM.
In the anaerobic digestion process, different microorganisms are involved in producing methane from complex biomass (such as food waste, the organic fraction of municipal solid waste, agro-industrial waste, algae, etc.) through four steps: hydrolysis, acidogenesis, acetogenesis, and methanogenesis.
To produce EM, a two-phase process has to be implemented, in which an appropriate separation of the acidogenic and methanogenic phases allows the complex organic material to be converted into hydrogen, carbon dioxide, and volatile fatty acids during the first stage, and these biodegradable compounds into methane and carbon dioxide during the methanogenic stage.
Moreover, processes able to convert a solid or liquid biomass into syngas (CO + H2), such as gasification, can be applied to produce EM. The gasifier can be coupled with a water-gas shift reactor, where the following reaction is promoted:

CO + H2O ↔ CO2 + H2

producing hydrogen from carbon monoxide. The hydrogen is then purified of CO2 and traces of CO, mixed with methane, and used.
Some EM pilot applications have been implemented worldwide; among them, the following should be cited:
The global environmental situation of the Earth is becoming increasingly critical, and the outlook for our future increasingly gloomy. The major reason for this pessimistic outlook is the rapidly growing human population. At the same time, consumption per person has risen tremendously in the developed countries. There is no doubt that the Earth will not be able to satisfy such increasing demand. Because of these developments, radical changes to the global situation, and especially to its ecology, lie ahead. Air pollution, the greenhouse effect, and their noticeable impact on coastal areas, especially in the Third World, are of course important critical points.
Today we have the opportunity to obtain the necessary information on the overall situation by means of modern remote sensing methods. The advantage of this kind of environmental data supply is that information is obtained worldwide to a single standard, at regular, short intervals, using comparable measures. These aspects of regularity and comparability offer great potential because they make it possible to produce “snapshots” of the environmental situation at regular intervals.
From a general point of view, environmental monitoring can be defined as the systematic sampling of air, water, soil, and biota in order to observe and study the environment, as well as to derive knowledge from this process. Monitoring can be conducted for a number of purposes, including to establish environmental “baselines, trends, and cumulative effects”, to test environmental modeling processes, to educate the public about environmental conditions, to inform policy design and decision-making, to ensure compliance with environmental regulations, to assess the effects of anthropogenic influences, or to conduct an inventory of natural resources.
Environmental monitoring can be conducted on the biotic and abiotic components of any of Earth's spheres (see Figure 1), and can be helpful in detecting baseline patterns and patterns of change in the inter- and intra-process relationships among and within these spheres. The interrelated processes that occur among the five spheres are characterized as physical, chemical, and biological processes. The sampling of air, water, and soil through environmental monitoring can produce data that can be used to understand the state and composition of the environment and its processes.
Environmental monitoring uses a variety of equipment and techniques depending on the focus of the monitoring. For example, surface-water quality can be measured using remotely deployed instruments, handheld in-situ instruments, or through the application of biomonitoring in assessing the benthic macroinvertebrate community. In addition to the techniques and instruments used during field work, remote sensing and satellite imagery can also be used to monitor larger-scale parameters such as air pollution plumes or global sea-surface temperatures.
When conducting oil and gas operations, there is a risk of impacting the marine environment. Generally, environmental authorities set up guidelines to monitor the environmental conditions around oil and gas production platforms.
Results from a long-term survey programme are normally used to assess:
As part of the monitoring surveys, several sediment samples (see figure 2) can be collected at different monitoring stations in order to carry out:
Physical and chemical analyses on the samples can include:
Analyses of the collected benthic fauna can include:
Statistical analyses and the available literature can also be used to evaluate the environmental state around the platforms.
Generally, to ensure the high quality of the collected results, all procedures comply with relevant international Health, Safety and Environmental (HSE) standards and with the requirements of local environmental authorities. This includes performing the survey in accordance with:
Monitoring activities have been performed in all three countries to look at the effects of discharges in the sediments and in the water column. The effects of flaring and light from offshore installations on migrating birds have been monitored on the Dutch Continental Shelf, and studies on the effects of seismic activity on fish and marine mammals have been performed on the Norwegian Continental Shelf. An overview of the monitoring activities performed in the United Kingdom, the Netherlands and Norway is given in Tables 1, 2 and 3.
Monitoring of sediments contaminated by discharges of oil-based muds (OBM) has shown that the benthic communities close to the discharge points have been highly modified, with a transitional zone showing detectable effects on benthic fauna and an outer zone with no detectable effects on the fauna. This pattern is observed in all three countries. The areas contaminated with OBM are decreasing, and so are the benthic effects. The Dutch study found biological effects out to 250 meters from the discharge point 20 years after the discharge. The latest data from Norway show a total contaminated area of 155 km2 on the Norwegian Continental Shelf. This is chemical contamination, not biological disturbance, and the area also includes sites where OBM has never been operationally discharged; hydrocarbon contamination at these sites may be caused by produced water or accidental spills.
The Dutch study on the effects of discharged water-based mud (WBM) cuttings showed no detectable effect on the benthic community. Norwegian monitoring and one-off surveys have shown a disturbance of the fauna typically out to approximately 50 meters from single wells. The disturbance is most likely caused by the physical impact of the cuttings, which kills species living in or on the sediment. Rapid recolonization is observed, but the composition of species may change if the grain size changes. In areas with several production wells the affected area is larger, and effects may be caused by discharges other than WBM and cuttings.
Results from Norwegian water column monitoring in the last few years are positive in the sense that the methods used are now functioning. It is crucial to know enough about how the plume of produced water is moving to be able to place the cages with test species at the right spots. The results show that caged mussels in the effluent accumulate PAH and that the levels decrease with increasing distance from the discharge.
The biological effects (biomarkers) also show gradients, with stronger responses in the cages closest to the produced water discharge. The levels of PAH metabolites suggest a moderate exposure level. The Dutch study showed an accumulation of naphthalene in blue mussel at a distance of 1000 meters from the platform. The analyses of wild fish in the Norwegian Tampen area have shown increased levels of DNA adducts in haddock. A different lipid content or lipid composition of the cell membranes has been shown in cod and haddock from the Tampen area compared with other areas in the North Sea. These effects may be due to the fish feeding on old cuttings piles, and are not necessarily a result of today’s produced water discharges. It has not, however, been established what these findings mean for the individual fish, the populations or the ecosystems as such.
Monitoring activities and studies other than the monitoring of discharge impacts have also been performed by the three countries. The Dutch study on birds suggests that the chance of flaring directly impacting a flock of birds is small and only significant at night during the migration periods.
Sound did not appear to have any effect on seabirds or songbirds during migration, but the study calculates that about 10% of the total bird population crossing the North Sea is impacted in some way by the light emitted from the main deck of offshore installations. The Norwegian study on the impacts of seismic surveys on fish showed that impacts (including mortality) on fish and their early life stages only occurred immediately adjacent (< 5 meters) to the sound source. This impact was not significant at the population level and did not affect recruitment into commercial stocks. Fish show a startle response to impulsive sound, and the effect may be observed up to 30 km from the source.
Particulate matter (PM) is a complex mixture of micrometric particles and liquid droplets, made up of organic soot and condensed volatile organic compounds (VOCs) as well as inorganic particles such as soil, dust, metals and acids (nitrates and sulphates). The particle size, fundamental for transport as well as for health effects, is usually classified by the aerodynamic diameter, the diameter of a unit-density sphere with equivalent aerodynamic characteristics (Figure 1). This size can vary over four orders of magnitude in the atmosphere. The largest particles (coarse fraction), mechanically produced, include pollen grains, mould spores, and wind-blown dust from agricultural processes, sea spray, uncovered soil, unpaved roads or mining operations; the smallest (fine fraction) are mainly formed from gases by nucleation and coagulation at a scale below 0.1-1 μm (accumulation range). Moreover, secondary aerosol can be formed by chemical and physical reactions in the atmosphere, as acidic species (from sulphuric and nitric acid) and ammonium salts (in the presence of ammonia). The carbonaceous fraction of aerosols is composed of organic matter (either primary, or secondary if derived from the oxidation of VOCs) and elemental carbon (EC, also known as black carbon, BC).
Sources and effects
Figure 2 represents the contribution to PM pollution from different sectors and activities in European countries. The particles produced by combustion processes represent the largest portion of the anthropogenic sources. Large stationary sources are related to power generation and, to a minor extent, directly to the oil & gas industry. The major exposure risks are related to domestic heating, while transport (urban traffic and the emissions of the diesel engines of harboured vessels) is the second most relevant source in inhabited areas. Gas flaring is recognized as an important source of pollution, even though limited to specific zones. Uncontrolled gas flaring can generate emissions of unburned hydrocarbons, particulates and polycyclic aromatic hydrocarbons (PAH). Every year, approximately 140-150 billion cubic meters of natural gas is flared into the atmosphere (equivalent to three quarters of Russia’s gas exports, or almost one third of the European Union’s gas consumption). In 2011, Johnson et al. measured the soot emission from a large gas flare in Uzbekistan; they highlighted a potentially dramatic environmental impact of gas flaring, calculating a soot emission rate of 7400 g/h, comparable to ∼500 buses constantly driving and estimated at 275 trillion soot aggregates per second.
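As a sanity check on the figures quoted for the Uzbekistan flare, the numbers above can be combined to give the implied per-bus soot rate and the implied mass of a single soot aggregate. This is purely illustrative arithmetic on the quoted values, not data from the original study:

```python
# Illustrative arithmetic on the flare soot figures quoted in the text
# (Johnson et al.): 7400 g/h of soot, compared to ~500 buses and to
# 275 trillion soot aggregates emitted per second.
soot_rate_g_per_h = 7400.0      # quoted flare soot emission rate
equivalent_buses = 500          # quoted equivalent number of buses

per_bus_g_per_h = soot_rate_g_per_h / equivalent_buses
print(f"Implied per-bus soot rate: {per_bus_g_per_h:.1f} g/h")

# Implied mass of a single soot aggregate:
aggregates_per_s = 275e12
mass_per_aggregate_g = (soot_rate_g_per_h / 3600.0) / aggregates_per_s
print(f"Implied mass per aggregate: {mass_per_aggregate_g:.1e} g")
```

The implied aggregate mass of a few femtograms is consistent with the submicronic soot aggregates discussed in the following paragraphs.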
Exposure to particulate matter is associated with serious health effects, such as respiratory and cardiovascular diseases, depending on the specific particle size, morphology and composition. The PM size is directly linked to the damaging potential: very fine inhalable particles remain suspended in the atmosphere for a long time, travelling long distances from the emitting sources, and, once inhaled, reach the deepest regions of the lungs and enter the circulatory system. Generally, the smaller the particle size (and the higher the specific surface area), the higher its toxicity, also because of the adsorption of pollutants with specific health effects (carcinogenic and mutagenic compounds). Heart attacks with associated premature death, irregular heartbeat, asthma, decreased lung function, and several respiratory symptoms, such as irritation of the airways, coughing or difficulty breathing, are among the recognized effects of PM exposure. PM pollution is estimated to cause more than 50,000 deaths per year in the United States and 200,000 deaths per year in Europe. Fine particles also impact extended ecosystems by travelling over long distances, reducing visibility, polluting ground and surface waters, and contributing to climate change and global warming (BC is the second most important climate-warming agent after CO2, with a radiative forcing of 1.1 W/m2). Another climate effect is cloud formation, since fine particles act as water condensation nuclei.
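The size dependence of atmospheric residence time mentioned above can be made quantitative with the Stokes terminal settling velocity, a standard textbook relation (not taken from this text), valid at low particle Reynolds number. The densities and viscosity below are illustrative assumptions:

```python
# Illustrative: Stokes terminal settling velocity in still air,
# v_t = rho_p * d^2 * g / (18 * mu), valid for low particle Reynolds
# number (and neglecting the slip correction, which matters below
# ~0.1 um). Shows why fine PM stays airborne far longer than coarse PM.
RHO_P = 1000.0    # particle density, kg/m^3 (unit-density sphere)
MU_AIR = 1.8e-5   # dynamic viscosity of air, Pa*s (assumed, ~20 C)
G = 9.81          # gravitational acceleration, m/s^2

def settling_velocity(d_m):
    """Terminal settling velocity [m/s] for a sphere of diameter d_m [m]."""
    return RHO_P * d_m**2 * G / (18.0 * MU_AIR)

for d_um in (0.1, 1.0, 10.0):
    v = settling_velocity(d_um * 1e-6)
    print(f"d = {d_um:5.1f} um -> v_t = {v:.2e} m/s")
```

The quadratic dependence on diameter means a 10 µm particle settles ten thousand times faster than a 0.1 µm one, which is why the fine fraction travels long distances while the coarse fraction deposits near the source.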
Removal technologies employ different strategies to separate solid particles from the flowing gas: intercepting particles by exploiting their size and shape (filtration and scrubbing), or exploiting external force fields such as gravitational, electrical and centrifugal ones.
Filtration
In a Fabric Filter (FF), the waste gas is forced through a tightly woven or felted fabric, which collects particulate matter by sieving and other related mechanisms. Fabric filters can take the form of sheets, cartridges or bags (the most common type), with a number of individual filtering units housed together in a group. At low particle loads, the collection efficiency is primarily related to the filter pore size and depth. A high particulate load forms a “cake” on the filter surface, increasing the collection efficiency. Fabric filters are used primarily to remove particulate matter (and other hazardous air pollutants in particulate form, such as metals) at moderate loads (with a gas flow rate limit of 2×10^6 Nm3/h) down to PM2.5. This technology is useful for collecting particulate matter whose electrical resistivity is either too low or too high for an electrostatic precipitator, so fabric filters are suitable for collecting fly ash from low-sulphur coal or fly ash containing high levels of unburnt carbon. The cleaning intensity and frequency are important variables determining the removal efficiency (the dust cake provides increased fine-particulate removal), the pressure drop across the fabrics (ΔP 100-500 mbar) and the consequent energy requirement (0.2-2 kWh/1000 Nm3). Catalytic filtration is commonly adopted in the new generation of diesel particulate filters (DPF) for automotive applications. Commonly, the oxidation catalyst and the particulate filter are combined, so that particles can be burnt off continually. The catalytic filter consists of an expanded polytetrafluoroethylene membrane laminated onto a catalytic felt substrate. It is used to separate particulate and to eliminate hazardous contaminants from the gaseous phase, such as dioxins and furans, but also aromatics, polychlorinated benzenes, polychlorinated biphenyls, volatile organic compounds and chlorinated phenols.
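The link between pressure drop and energy requirement can be sketched with the ideal fan relation E/V = ΔP/η. The pressure drops and the fan efficiency below are assumptions chosen for illustration; with ΔP of a few kPa and a ~70% efficient fan, the result falls in the 0.2-2 kWh/1000 Nm³ range quoted above:

```python
# Sketch of the fan energy needed to push gas through a filter:
# ideal specific energy E/V = dP / eta_fan. The pressure drops and the
# fan efficiency are illustrative assumptions, not design data.
def fan_energy_kwh_per_1000_nm3(delta_p_pa, eta_fan=0.7):
    # (J/m^3) * 1000 m^3, converted with 3.6e6 J per kWh
    return delta_p_pa / eta_fan * 1000.0 / 3.6e6

for dp_pa in (1000.0, 2500.0, 5000.0):
    e = fan_energy_kwh_per_1000_nm3(dp_pa)
    print(f"dP = {dp_pa:.0f} Pa -> {e:.2f} kWh/1000 Nm^3")
```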
The filtration efficiency of a DPF is > 99% for solid matter (globally > 90% when the non-solid portion is considered). These systems can alternatively be designed to trap only a portion of the total particle load (e.g. 70% instead of 100%) in order to obtain a lower back pressure and a lower blocking risk.
Gravity and Centrifugal force
Larger particles can be removed from the flue gas by exploiting gravity/mass inertia and internal obstructions. A separation chamber can be installed as a preliminary step to prevent entrainment of the washing liquid with the purified waste gas and/or to remove dust, aerosols and droplets. Abrasive particles can also be treated in order to preserve the downstream equipment. The separation occurs by impact with properly designed internal surfaces, such as baffles, lamellae or metal gauzes. The main advantages of separators are their suitability for higher temperatures and the lack of moving parts, which means low maintenance and low pressure drop. On the other hand, the low removal efficiency makes them unsuitable for systems with small density differences between gas and particles. By exploiting centrifugal forces, separation can be achieved with cyclones. In a purposely designed conical chamber, the incoming gas is forced into a circular motion down the cyclone near the inner surface of the cyclone tube. Particles in the gas stream are forced toward the cyclone walls by the centrifugal force of the spinning gas; the larger ones reach the walls and fall into a bottom hopper, where they are collected. These simple devices are used primarily to control particles above PM10 (as pre-cleaners for more expensive final control devices such as fabric filters or electrostatic precipitators); high-efficiency cyclones can be designed to be effective even for PM2.5. The main advantages of classical separation chambers are retained in these conical arrangements.
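The size selectivity of a cyclone can be estimated with the classic Lapple cut-diameter correlation, a standard textbook model (not taken from this text). All input values below are illustrative:

```python
import math

# Lapple cut-diameter correlation for a cyclone (standard textbook
# model): d50 is the particle diameter collected with 50% efficiency,
# d50 = sqrt(9*mu*W / (2*pi*Ne*Vi*(rho_p - rho_g))).
# All numerical inputs are illustrative assumptions.
def lapple_d50(mu, width, n_turns, v_inlet, rho_p, rho_g):
    """Cut diameter d50 [m]."""
    return math.sqrt(9.0 * mu * width /
                     (2.0 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

d50 = lapple_d50(mu=1.8e-5,     # gas viscosity, Pa*s (air)
                 width=0.2,     # inlet width W, m
                 n_turns=5,     # effective number of gas turns Ne
                 v_inlet=15.0,  # inlet velocity, m/s
                 rho_p=2000.0,  # particle density, kg/m^3
                 rho_g=1.2)     # gas density, kg/m^3
print(f"d50 = {d50 * 1e6:.1f} um")
```

A cut diameter of a few micrometres for these typical inputs is consistent with cyclones being effective mainly above PM10 and, with careful design, down toward PM2.5.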
Wet Scrubbing
Wet scrubbers (WS) intercept PM through direct contact with liquid droplets. WS can be assembled in variable geometries, each optimized for a specific gas flow rate; according to the contact dynamics, they are arranged as spray towers, packed-bed scrubbers or Venturi scrubbers (Figure 3). The latter accelerates the gas stream in a throat to atomize the scrubbing liquid and to improve the gas-liquid contact.
Liquid scrubbers are used for the removal/recovery of flammable and explosive dusts as well as for the treatment of gaseous compounds. Furthermore, WS have the advantage of cooling and supersaturating the gas stream, leading to particle scrubbing by condensation. WS can operate at medium/high collection efficiency and low cost. On the other hand, the main disadvantages of WS are the risk of corrosion and freezing, the generation of a liquid by-product, and the low particle collection efficiency in the 0.1-2 µm range.
Electrostatic force
The electrostatic precipitator (ESP) uses electrical forces to drive particles in the gas stream onto collector plates. It is of the “wire-plate” type if the gas flows horizontally, parallel to vertical plates of sheet material, and of the “wire-pipe” type if the electrodes are long wires running along the axis of each tube. The entrained particles acquire an electrical charge when passing through a corona field generated by discharge electrodes (the required DC voltage is in the range of 20-100 kV). The ESP has high efficiency and low pressure drop. The main disadvantages are related to the maintenance of the high-voltage generation (electrode cleaning) as well as the danger of dust explosion after discharges. In 2006, Javorek et al. published a comprehensive review of the state of the art of wet ESPs for gas cleaning (mainly dust or smoke particles). In a single-stage ESP, the charging and discharging (collection at the electrode) take place in one device, while in a two-stage ESP, charging and removal of the particles occur in separate electric fields (and consequently separate chambers). The two-stage ESP is common for small waste gas streams (< 90,000 Nm3/h) characterized by a high concentration of micrometric and sub-micrometric particles (e.g. smoke or oil mist). The EPA gives a detailed overview of ESP types, configurations and design procedures.
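ESP sizing is classically described by the Deutsch-Anderson equation, a standard textbook relation (not taken from this text): η = 1 − exp(−wA/Q), with w the effective particle migration velocity, A the collecting plate area and Q the gas flow rate. The numbers below are illustrative:

```python
import math

# Deutsch-Anderson model for ESP collection efficiency (standard
# textbook relation; all numerical values are illustrative assumptions).
def esp_efficiency(w_m_s, area_m2, q_m3_s):
    """Fractional collection efficiency eta = 1 - exp(-w*A/Q)."""
    return 1.0 - math.exp(-w_m_s * area_m2 / q_m3_s)

eta = esp_efficiency(w_m_s=0.1,      # effective migration velocity
                     area_m2=5000.0, # collecting plate area
                     q_m3_s=100.0)   # gas flow rate
print(f"Collection efficiency: {eta * 100:.1f} %")
```

The exponential form makes the trade-off explicit: each additional increment of plate area removes a fixed fraction of what remains, so pushing from 99% toward 99.9% roughly requires half again as much plate area.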
Mounting environmental evidence and the recent emission regulations are forcing the development of more effective gas cleaning technologies (particularly effective at submicronic sizes). The existing technologies have low efficiency in the particle diameter range 0.01-1 µm, called the Greenfield gap region. As mentioned above, the capture of particulate matter is usually carried out by fabric filters and electrostatic precipitators, which are the current best available technologies. However, these units show limited efficiency in capturing particles of submicron or nanometre size. Moreover, ESP technology is ineffective for particle resistivities outside the range 10^8-10^11 Ω·cm and for gas streams containing water droplets. On the other hand, FF cannot be used if the water content of the flue gas can produce condensation on the cake deposited on the bags. Therefore, a new challenge for scientific research is the development of new cleaning systems to remove particles from flue gas, and the optimization of the existing technologies in order to improve the capture of submicronic particles. An example is the research activity in the field of diesel particulate abatement, where several strategies are under development, particularly in the context of ship emissions. As the emissions from diesel ship engines represent an emerging issue, the International Maritime Organization has strengthened the environmental regulations. A consortium of European universities and industrial partners, participating in the European Seventh Framework Programme, developed a modular on-board process combining different units to remove specific primary pollutants (SOx, NOx, PM and VOC). The PM removal technology, developed by the University of Naples, consisted of an innovative upgrade of a wet scrubbing device. In fact, the Wet Electrostatic Scrubber (WES) increases the scrubber collection efficiency by sweeping the precipitation chamber with charged droplets.
These act as small collectors, attracting the particles through the Coulomb force. A practical example of this phenomenon is the scavenging of atmospheric aerosol during thunderstorms, which achieves very high removal efficiencies. Different charging and spraying configurations are possible, and PM can be charged either negatively or positively, with droplets of opposite polarity. A commercial application of this interesting technology is the Cloud Chamber Scrubber (CCS) by Tri-Mer Corporation (Figure 4). It is composed of three zones: a preconditioning chamber (A) for the removal of coarse particles and humidity/temperature adjustment; a cloud generation vessel (B) for the removal of neutral and negative submicronic particles; and a second cloud generation vessel (C) with negatively charged droplets, so that neutral and positive particles are captured. Afterwards, the treated air flows through a mist eliminator before discharge (particles between 0.1 and 2.5 µm).
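A rough order-of-magnitude sketch shows why charged droplets are such effective collectors: for plausible charges and separations, the Coulomb attraction on a submicronic particle dwarfs its weight. Every numerical value below is an illustrative assumption, not data from the text:

```python
import math

# Order-of-magnitude comparison (all values are illustrative
# assumptions): Coulomb attraction between a charged scrubber droplet
# and a charged fine particle, versus the particle's own weight.
K = 8.99e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19

q_droplet = 1e-12               # C, a fraction of the Rayleigh limit
q_particle = 100 * E_CHARGE     # ~100 elementary charges on the particle
r = 100e-6                      # droplet-particle separation, m

f_coulomb = K * q_droplet * q_particle / r**2

# Weight of a 1 um unit-density particle:
d_p, rho_p, g = 1e-6, 1000.0, 9.81
weight = (math.pi / 6.0) * d_p**3 * rho_p * g

print(f"Coulomb force: {f_coulomb:.1e} N, weight: {weight:.1e} N, "
      f"ratio ~ {f_coulomb / weight:.0f}x")
```

With these assumed values the electrostatic force exceeds gravity by roughly three orders of magnitude, which is the working principle that lets the WES reach into the Greenfield gap where inertial and gravitational mechanisms fail.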
Advances in nanoscale-structured materials represent one of the most interesting innovations bringing technological progress to many industries. Nanoparticle technology developments essentially concern materials engineering, with the possibility of new metallic alloys ensuring high strength, low weight and high resistance to corrosion and abrasion. However, these materials can appear in different forms, from solid to fluid, with the possibility of ad hoc nanoparticle-fluid combinations.
The upstream oil & gas industry could receive a great boost from innovations in this field, since it is based on processes that expose equipment materials to extreme working conditions. Moreover, the developments of nanotechnology, together with suitable simulation tools, allow the characterization of interfacial phenomena between minerals and fluids (wettability etc.), leading to a better understanding of the mechanisms governing hydrocarbon recovery. Currently, the growth of shale gas and oil production increases the need for nanotechnology to better characterize the organic content in shale nanopores.
Almost every oil & gas company is investing heavily in nanotechnologies to enhance oil recovery, improve equipment reliability, reduce energy losses during production, provide real-time analytics on emulsion characteristics, and develop high-performance products (e.g. high-performance lubricating oils, which have great relevance in the oil industry). In the following, some recent applications in these fields are described.
The use of nanoparticles in Enhanced Oil Recovery (EOR) is one of the most important fields of application, as it provides larger amounts of oil during extraction, thus ensuring a faster return on investment. Different techniques using nanotechnology are being considered, and the use of nano-robots for real-time insight into the well pad appears very promising. These tiny robots will be able to provide operators with useful information to better conduct drilling operations, for example by adapting the additive mixtures or the operating pressure dynamically. At the EXPEC Advanced Research Centre, important work has been carried out on the use of nano-robots in oil & gas reservoirs, designing reservoir robots (called Resbots) used as nano-reporters. The main difficulty lies in adapting the physical and chemical properties of the Resbots so that they can pass through the tiny pores and then be recovered, but some experiments have brought good results. By adding sensors inside the robots, very important information can be obtained.
EOR could also be achieved by the use of nanoparticles dispersed in suitable fluids. Recently, Ogolo et al. performed EOR experiments using different nanoparticles, such as magnesium oxide, aluminium oxide, zinc oxide, zirconium oxide, tin oxide, iron oxide, nickel oxide, hydrophobic silicon oxide and silicon oxide treated with silane, showing enhanced recovery and boosted hydrocarbon production. The effects resulting from the use of these substances are related to changes of rock wettability, reduction of oil viscosity, reduction of interfacial tension, reduction of the mobility ratio, and permeability alterations. A further example of using nanoparticles as an additive during operations (in order to improve the oil recovery efficiency) has been provided by the University of Alaska Fairbanks, where researchers highlighted the significant performance achieved by metal nanoparticles dispersed in supercritical CO2, which reduce the heavy-oil viscosity and consequently increase the recovery efficiency.
One of the main problems in the oil & gas industry is the need for materials capable of withstanding highly corrosive environments. The use of sour crude is exacerbating this problem, reducing equipment lifetimes, particularly for pipelines and heat exchangers. The need to solve these problems has led to research in the field of nanotechnology, in order to develop nanostructured coatings able to increase corrosion resistance. For example, Saudi Aramco, in collaboration with Integran, has carried out important research in this field through a product development program called "Application of Nanotechnology for In-Situ Structural Repair of Degraded Heat Exchangers". The aim is to develop products able to reduce corrosion damage and the downtime due to maintenance. In aggressive environments with corrosion and high wear, the use of protective films is complex. Until a few years ago, electroplated "engineered hard chrome" (EHC) was used for surface protection. EHC was preferred to cadmium (Cd) or zinc-nickel (ZnNi) electroplated metals because the latter offer low wear resistance and are quickly removed. Given the toxicity of chrome, which negatively affects workers, a replacement for EHC has recently been sought. In this respect, Integran proposes an electroplated nanocrystalline cobalt, called Nanovate CoP, which represents an innovative and cost-effective alternative to EHC. Figures 2, 3 and 4 show the results of typical corrosion tests.
Heat loss during oil & gas treatment operations is a very important problem. It has been estimated that about 50% of the supplied heat is lost in the equipment, and this considerably lowers the process efficiency. Research in this field is leading to the formulation of aerogel solutions that insulate the equipment surface. The use of nanotechnologies in this field is making a major contribution, as proved by innovative products like Nansulate® by Industrial Nanotech, Inc. Nansulate® provides very low thermal conductivity through the use of a nanocomposite called Hydro-NM-Oxide mixed with acrylic resin and performance additives.
A further possibility offered by nanoparticles concerns the real-time analysis of the emulsions extracted from wells, achieved by injecting nanoparticles that are later recovered. One of the major companies in this field is MAST Inc., which develops instruments to identify the spectroscopic characteristics of the particles during extraction operations. The particles contain a magnetic core and are covered by sensitive substances which detect the presence of sulfur, water or gas. The experience in magnetic sensors has led to the development of techniques to observe them even in a fully opaque stream.
The importance of this technology has grown rapidly with the intense use of fracking, which secures more resources and a new phase of development in oil exploration. However, fracking can also cause significant environmental impacts and therefore requires considerable environmental monitoring efforts. In this respect, the use of nanosensors enables the development of techniques to preserve the purity of groundwater in the proximity of the wells.
The use of nanoparticles as additives in particular mixtures is bringing innovation to different industrial sectors, allowing the development of new high-performance products which will positively influence the related industries. One of the most important innovations is offered by a new generation of anti-wear lubricant oils. As shown in different works, experimental results prove remarkable improvements in tribological behaviour (low wear and increased load-carrying capacity). The lubricant effect of different nanoparticles used as additives depends on the material category and essentially derives from the properties of typical nanoparticle materials. These are summarized in table 2 and well described in Guo et al. 2013.
Lubricants are products used mainly in engines to reduce friction between mechanical parts. Contrary to the majority of petroleum products, which are identified through several parameters (the specs), lubricants are commonly identified only by their real performance, which can be tested only experimentally in specialized laboratories. The most important lubricant spec is the Viscosity Index (VI), a measure of the viscosity variation with temperature.
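The Viscosity Index can be sketched in the spirit of the standard ASTM D2270 calculation (for oils with VI ≤ 100): VI = 100·(L − U)/(L − H), where U is the oil's kinematic viscosity at 40 °C, and L and H are the 40 °C viscosities of the 0-VI and 100-VI reference oils having the same 100 °C viscosity. In the standard, L and H come from tables; here they are supplied as inputs, and all numerical values are hypothetical:

```python
# Sketch of the Viscosity Index calculation (spirit of ASTM D2270,
# VI <= 100 branch): VI = 100 * (L - U) / (L - H).
# U: oil's kinematic viscosity at 40 C; L, H: 40 C viscosities of the
# 0-VI and 100-VI reference oils with the same 100 C viscosity.
# In the standard, L and H are tabulated; here they are given inputs,
# and the example values are hypothetical.
def viscosity_index(u_40c, l_ref, h_ref):
    return 100.0 * (l_ref - u_40c) / (l_ref - h_ref)

vi = viscosity_index(u_40c=73.3, l_ref=119.9, h_ref=69.48)
print(f"VI = {vi:.0f}")
```

The closer U sits to the 100-VI reference H (i.e. the less the oil thins out between 100 °C and 40 °C), the higher the index.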
Lubricants are a blend of “base oils” and several additives. Base oils are generally produced from crude oils, but can also be produced from petrochemical feedstocks (synthetic lubes). Additives are chemicals produced by a few oil companies and by some chemical companies focused on this field, such as Lubrizol. The effective lubricant performance strictly depends on the additive mixture. Additives and base oils are normally traded on the market, so the majority of companies buy and blend them. Used lubricants (exhaust oils) may be collected and reprocessed in order to obtain marketable “second-hand” products. Lubricants are among the most sophisticated and technology-intensive products of refining. Given their lower demand with respect to other petroleum products, they are produced in only a limited number of refineries.
The quality of mineral base oils strictly depends on the origin of the crude, although it can be partially modified through refinery processes. Base oils are a mixture of hydrocarbons, including alkanes (paraffins), alkenes (olefins), alicyclics (naphthenes), aromatics and some “mixed hydrocarbons” (molecules combining different groups of the above). Regarding base oil production, the aromatics have a negative impact on the viscosity index. They also worsen the base oil characteristics, mainly by increasing deposit formation and reducing oxidation resistance.
Besides hydrocarbons, base oils contain the non-hydrocarbon molecules normally present in crude oil. The main non-hydrocarbon components are sulphur, nitrogen and oxygen compounds; the sulphur heterocyclics are the most abundant of them.
The base oil feedstock is the heavy vacuum gas oil; the subsequent units are a solvent extraction, to separate the aromatics, and a deparaffinization, to extract the heavy paraffins (waxes).
The solvent treatment may be replaced by a hydrogen process, e.g. hydrocracking (HDC), perfectly integrated and already present in some refineries. This allows good yields and excellent-quality bases, even when starting from a traditionally unsuitable crude. Figure 1 shows an integrated scheme for the production of base oils, either through solvent extraction or through HDC. The process usually ends with a hydrofinishing unit, which improves colour, stability, etc. Blending and additivation are the final steps.
Base oil cuts are internationally classified on the basis of the SUS viscosity (Saybolt Universal Seconds) measured at 40 or 100 °C (100 or 210 °F). In addition, a code precedes the SUS viscosity value, such as, for example, SN (solvent neutral) or HVI (High Viscosity Index). The abbreviation BS (Bright Stock) is used for heavier cuts produced from the deasphalted residue. The crudes most suitable for base oil production are the paraffinic ones, characterized by a high viscosity index (VI) but also by a high wax content. For certain applications, naphthenic crudes are more suitable because of the high quality of the middle and low-VI cuts, the reduced wax content and the naturally low pour points.
Paraffinic base oils
Paraffinic base oils arising from paraffinic crudes are the most widely used.
The characteristics of these base oils depend on the original hydrocarbon composition, as well as on the effect of the solvent extraction and de-waxing processes. The viscosity index of paraffinic base oils is generally greater than 95, and the pour point is relatively high.
The viscosity index is higher the stricter the aromatic extraction. It is also possible to increase the index by decreasing the de-waxing severity, but in this case the low-temperature properties will worsen.
Naphthenic base oils
Naphthenic base oils are produced from a few crudes (typically Venezuelan) and are currently used in a few applications where low-temperature properties are required and the viscosity index is less important.
These base oils have better solvent power but lower resistance to oxidation than paraffinic ones. Generally, they are also characterized by a low viscosity index (between 40 and 80) and a relatively low pour point, due to the absence of paraffins.
Most synthetic bases have both higher VI and flash points, but lower pour points, compared with mineral ones. On this basis, these oils are particularly useful in extreme temperature and pressure conditions.
Synthetic bases such as polyalphaolefins (PAO), alkylated aromatics, esters, polyglycols, polybutenes and polyinternalolefins (PIO) are widely used in the lubricants industry.
Polyalphaolefins show very good characteristics when operating at cold temperatures, thanks to their high degree of branching and low volatility. However, in some oxidation tests they appear less resistant than mineral bases (in the absence of additives). This behaviour is due to the absence of the natural antioxidants present in mineral oils. PAOs are less polar and thus have low solvent power (solvency). This comes at the expense of the ability to solubilise the polar additives present in the lubricating oil and the oxidation products (gums) formed during service. The wide range of temperatures in which PAOs can work, together with their excellent chemical and physical characteristics, allows their use in various application areas.
Alkylbenzenes have poorer overall characteristics than PAOs but are used in refrigeration oils thanks to their excellent solubility and low pour point.
Polyglycols generally have a high viscosity index, which makes them particularly suitable for transmission lubricating oils, but they have low oxidation resistance.
Polybutenes are shear-resistant polymers used as viscosity index improvers (VII). They have higher volatility but lower oxidation resistance and lower viscosity than PAOs and esters. In synthetic lubricants, polybutenes are usually combined with esters and PAOs to help control lubricant viscosity, providing thickening with low deposit formation.
The most immediate effect of the ester group on lubricant properties is lower volatility and an increased flash point. Esters also influence other properties such as thermal stability, solvent power, lubricity and biodegradability. Polyinternalolefins (PIO) are characterized by a high viscosity index, excellent rheological behaviour at both low and high temperatures, low volatility and good thermal-oxidative behaviour. They are employed as lubricants for internal combustion engines and industrial machinery.
Non-conventional base oils (NCBO) are produced from vacuum cuts treated by hydrogen-based processes. The two main processes are hydrocracking and wax hydro-isomerization. NCBOs offer two important advantages: hydrogen-based processes can replace solvent extraction, reducing the dependence on crude origin, and they ensure high-quality base oils (better than conventional ones) thanks to lower volatility, a higher viscosity index, better thermal stability and lower sulphur content.
Re-refined bases are produced by re-processing exhausted oils, which must not be released into the environment but must by law be collected at authorized centres, from which they can be sent to controlled combustion plants or to re-refining.
The re-refining processes, which consist of treatments for removing volatile and insoluble components and spent additives, are able to produce lubricant bases with the same characteristics as mineral bases.
Re-refining yields about 60 kg of re-refined oil for every 100 kg of exhausted oil.
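The stated yield can be expressed as a one-line mass-balance helper; a minimal sketch, where the function name and the 60% default are illustrative values taken from the figure above:

```python
def rerefined_yield(exhausted_oil_kg, yield_fraction=0.60):
    """Mass of re-refined base oil recovered from exhausted oil.

    The 0.60 default encodes the ~60 kg per 100 kg figure quoted
    in the text; real plant yields vary with feed quality.
    """
    return exhausted_oil_kg * yield_fraction

print(rerefined_yield(100))   # 60.0 kg, as in the text
print(rerefined_yield(250))   # 150.0 kg
```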
The treatment ends with a hydrogen finishing step which eliminates or reduces the content of polynuclear aromatics (PNA), which are carcinogenic agents.
Lubricating base oils are classified according to their physical characteristics and/or production process. The API (American Petroleum Institute) classifies base oils into five groups.
Group I - These oils are usually solvent-processed and have a good degree of solvency, but they are the most vulnerable to oxidation and thermal degradation compared with oils processed by other routes. Group I oils are used in almost all automotive and industrial applications and are important for the formulation of lubricating greases.
Group II - Oils subjected to mild hydrocracking and catalytic de-waxing. They have high saturate levels and good thermal and oxidation stability. These oils are used in a wide range of automotive and industrial applications.
Group III - Typically subjected to severe hydrocracking, advanced catalytic de-waxing and/or hydro-isomerization, they have high viscosity indexes and very good thermal and oxidation stability. They are used primarily in the automotive sector.
Group IV - Oils produced synthetically. The main characteristics relate to low pour points, high viscosity indexes, excellent thermal stability and excellent oxidation stability. These oils are used primarily in the automotive industry, such as high-quality motor oils and transmission oils.
Group V - This group includes base oils not covered by the other groups, such as naphthenic oils, esters and polyglycols.
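The five API groups above can be expressed as a small classifier. The numerical thresholds (saturates, sulfur and viscosity-index limits) are not given in the text; they are taken from the widely published API 1509 limits, so treat this as an illustrative sketch rather than a normative implementation:

```python
def api_base_oil_group(saturates_pct, sulfur_pct, viscosity_index,
                       is_pao=False):
    """Classify a base oil into API Groups I-V.

    Thresholds follow the commonly published API 1509 limits
    (assumed here, not stated in the text): Groups I-III are
    separated by saturates content, sulfur content and viscosity
    index (VI); Group IV is reserved for PAOs and Group V
    collects everything else.
    """
    if is_pao:
        return "Group IV"
    if saturates_pct >= 90 and sulfur_pct <= 0.03:
        if viscosity_index >= 120:
            return "Group III"
        if 80 <= viscosity_index < 120:
            return "Group II"
    elif 80 <= viscosity_index < 120:
        return "Group I"
    return "Group V"

# Solvent-refined paraffinic oil: ~75% saturates, 0.10% S, VI ~100
print(api_base_oil_group(75, 0.10, 100))    # Group I
# Severely hydrocracked oil: 99% saturates, 0.001% S, VI ~130
print(api_base_oil_group(99, 0.001, 130))   # Group III
```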
The development of lubricants is traditionally based on mineral oils, owing to their good technical properties and reasonable price. A disadvantage of mineral oil is its poor biodegradability, which may cause environmental pollution.
Consequently, research has evolved towards synthetic esters used as lubricants, exploiting renewable resources for the production of fatty acids.
In this way, the lubricants are sustainable and biodegradable. The physico-chemical properties of esters are able to cover the entire range of technical requirements for industrial lubricant development, ensuring high performance.
Experimental studies on synthetic esters have addressed different types of formulations, mainly lubricants based on saturated and unsaturated esters.
The oxidation stability of saturated ester bases is higher than that of unsaturated esters. In the particular case of rapeseed oil, the oxidation stability of the saturated esters is comparable to that of mineral oil bases. Esters also exhibit lower friction than mineral oil.
In many industrial applications technological advancement is strongly linked to innovation in the field of lubricants. For this reason, important efforts are being made to improve their quality. The objective is twofold: on one hand, increasing lubricant life and reducing friction; on the other, reducing the environmental impact of fossil-based lubricants.
To meet these challenges, research into ionic liquids as a new generation of lubricants is ongoing.
These new systems show a significant improvement in wear and friction. Ionic liquids consist of large molecules: an asymmetric organic cation paired with an inorganic anion. The large ion size spreads the charge and weakens the electrostatic forces between ions, so that a regular crystal structure rarely forms and the compounds may be liquid at room temperature. Ionic liquids have several properties that make them suitable as potential lubricants: their low volatility, low flammability and thermal stability allow them to safely absorb the temperature and pressure increases that occur under high friction.
Another significant advantage is the variety of usable anions and cations: at least one million possible combinations have been estimated, each with its own specific properties. This means that ionic liquids can be tailored to particular applications with high flexibility; for example, the specific task may concern adsorption on a surface, a particular reaction, or miscibility in a base oil.
For well-known lubrication systems, such as steel on steel, as well as for difficult lubricating systems such as steel on aluminium, ionic liquids have been shown to outperform available commercial lubricants. However, ionic liquids are currently more expensive than conventional lubricants, so they may be limited to niche applications. For this reason, ionic liquids are at present most promising as lubricant additives, where more widespread use is possible. Numerous nanoparticles have also been explored as additives in recent years. The results are very encouraging and show an overall improvement in friction and wear performance even at concentrations below 2% by weight. In particular, particles such as CuO, ZnO and ZrO2 showed better performance than conventional additives.
Global energy demand has increased dramatically in recent years, and most of the world's energy needs (>80%) are still covered by conventional fossil fuels such as coal, petroleum and natural gas (Table 1). The issues of energy efficiency in fuel production/combustion and of resource depletion, together with increasing concerns about climate change and environmental pollution related to conventional fuels, are driving industrial R&D towards the development of alternative solutions. On this basis, this brief review focuses on the most recent strategies in the field of alternative fuels, with a specific insight into the low-emission strategies of the automotive industry.
The emissions from conventional fuel combustion are characterized mainly by the presence of carbon monoxide (CO), nitrogen oxides (NOx), sulfur oxides (SOx), hydrocarbons and particulate matter (PM). NOx are harmful to human health and act as precursors of tropospheric ozone. Acute CO poisoning can cause severe toxicity of the central nervous system and heart, while chronic exposure causes depression, confusion and memory loss; CO poisoning mainly causes hypoxia by combining with hemoglobin to form carboxyhemoglobin, reducing the oxygen-carrying capacity of the blood. Exposure to more than 20 ppm SO2 can cause death; moreover, SOx pollution strongly affects the life of entire ecosystems through its influence on climate. Recent medical research suggests that PM is among the most dangerous pollutants: the effects of its inhalation (both acute and chronic) are nowadays associated with the majority of respiratory diseases, from asthma to lung cancer, and also with cardiopulmonary mortality, premature delivery, birth defects and premature death. Besides these strong environmental impacts, all conventional fuels generate greenhouse gas emissions, causing the well-known climate changes. Considering what will happen when oil runs out, Prof. Chris Rhodes asserts that, although the world supply of crude oil is not going to run out any time soon, the current production rate cannot be maintained: “from 1965 to 2005, we see that by the end of it, humanity was using two and a half times as much oil, twice as much coal and three times as much natural gas, as at the start, and overall, around three times as much energy: this for a population that had “only” doubled. Hence our individual average carbon footprint had increased substantially – not, of course, that this increase in the use of energy, and all else, was by any means equally distributed across the globe”.
Following the Kyoto Protocol and the subsequent national directives, the industrialized countries are setting stricter emission-control policies for stationary and mobile sources. The main strategies for the development of low-emission vehicles (LEV) are the realization of alternative low-emission fuels for the conventional internal combustion engine vehicle (ICEV) and the development of new high-tech renewable LEVs such as hybrid and fuel cell vehicles (Fig. 1). Although the latter are making promising steps towards commercialization, they have not yet gained a considerable market because of economic, political and technological barriers. As a result, the ICEV persists as the dominant design. This design is therefore the focus of recent R&D efforts aimed at decreasing polluting emissions and increasing engine efficiency (by developing injection, combustion-chamber and ignition-control technologies), as well as at devising alternative (and more environmentally friendly) fuels.
The main pollutants in diesel emissions, NOx and PM, have formation mechanisms that hinder the simultaneous reduction of both, making a trade-off between the two necessary. Lowering the combustion flame temperature to reduce NOx generally disturbs the balance of soot formation and burnout, resulting in an increase in PM emissions. Conversely, particulate emissions can be reduced by increasing the combustion temperature, which results in increased NOx emissions.
One way to overcome this issue is to replace the fuel with emulsions of diesel oil and water (without retrofitting the engine system). Lif and Holmberg gave an extended review of water-in-diesel systems. Water-in-oil (W/O) emulsions are prepared by using surfactants together with mechanical (including ultrasonic), chemical, or electric homogenizing equipment (e.g. stirring the water into microdroplets within the oil phase). Surfactants, thanks to the presence of both lipophilic and hydrophilic groups, reduce the oil–water surface tension, creating oil-in-water or water-in-oil two-phase emulsions (a layer of ionic surfactant can also prevent droplet merging). When the emulsion is heated, the water droplets vaporize, breaking up the surrounding oil (microexplosion). This secondary atomization increases the fuel surface area and the extent of fuel–air mixing. Secondly, the presence of water dilutes the nuclei of soot growth, limiting soot growth rates. Moreover, the presence of water can enhance soot burnout by increasing the presence of oxidizing species. All these aspects contribute to lowering PM emissions by inhibiting both soot and ash formation. At the same time, the high latent heat of vaporization of water lowers the temperature, reducing NOx emissions. Nadeem et al. compared the engine and emission performances of emulsified fuels (5–15% of water) using conventional (CS) and gemini surfactants (GS). Their experimental results highlight the potential of W/O emulsions to significantly reduce the formation of thermal NOx (from more than 700 ppm to about 500 ppm), CO, SOx, soot, hydrocarbons and PM (more than 70% reduction) in diesel engines.
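The Nadeem et al. figures quoted above can be checked with a trivial helper; the function below is a hypothetical illustration, not part of any cited work:

```python
def percent_reduction(baseline, value):
    """Percentage reduction of a measurement relative to a baseline."""
    return 100.0 * (baseline - value) / baseline

# Thermal NOx dropping from ~700 ppm to ~500 ppm (figures from the
# Nadeem et al. comparison cited above) is roughly a 29% reduction:
print(f"NOx reduction: {percent_reduction(700, 500):.1f}%")
# NOx reduction: 28.6%
```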
Conventional techniques for the desulfurization of transportation fuels are based on hydro-desulfurization (HDS), in which the sulfur in the fuel is removed as H2S. This technique has the drawbacks of limited efficiency, due to the low reactivity of benzothiophene and dibenzothiophene, and high costs, due to the severe operating conditions and hydrogen requirements. Oxidative desulfurization, by contrast, is based on the conversion of non-polar sulfur-containing aromatic hydrocarbons into the corresponding sulfones, which are easily extractable with methanol. This liquid–liquid heterogeneous system, which depends on mass transfer across the interface, can be enhanced through cavitation, both ultrasonic and hydrodynamic. Cavitation is the nucleation, growth and transient collapse of micrometric gas–vapor bubbles driven by a pressure variation. It induces physical and chemical effects in the reaction system that enhance the kinetics and yield of the process. The chemical effects consist of the generation of radicals through the dissociation of gas and vapor molecules during the transient collapse of the cavitation bubbles. The physical effects, in terms of turbulence generation and hence viscous dissipative eddies, shock waves and microjets, can be exploited to create emulsions and reduce mass-transfer limitations. A brief description of ultrasonic desulfurization is given by Sulphco, Inc., a Nevada corporation, which reported great results in the enhancement of fuel desulfurization, showing an impressive conversion to sulfones through an innovative treatment with ultrasonic horns. Several innovative applications of cavitation-based desulfurization, from patents to applied research and technology development, have appeared in the literature.
While in acoustic cavitation the pressure variation is produced by ultrasonic waves, in hydrodynamic cavitation it is realized through properly designed flow restrictions operating at different pressures and flow rates. An example of the bubble dimensions and shear stresses at the collapse stage is shown in Figure 3.
Given the generally shared belief that the upcoming shortage of oil will accelerate the switch to alternative fuels, all the major oil and automotive companies have alternative-fuels research programs. Moreover, R&D in alternative fuels is often related to environmentally friendly strategies. The term alternative fuels comprises hydrogen, compressed natural gas (CNG), liquefied petroleum gas (LPG), biogas, dimethyl ether (DME), alcohols such as methanol and ethanol, vegetable oils and fatty acid methyl esters, and blends of these with gasoline or diesel. Opinions therefore differ, and an ultimate decision about which products will dominate the vehicle-fuel market in the future is uncertain and depends on political as well as economic considerations. As visible from Figure 4, the cost of alternative fuels (ethanol produced from corn in the U.S.) often follows the cost of the equivalent conventional fuel, gasoline, its principal market competitor (and is rarely strictly connected to raw-material prices). Generally, fuel generation can follow the natural gas, biomass or electricity pathway. Natural gas is a versatile fuel, employable in modified spark-ignition engines or in dedicated engines. It can be used directly in compressed or liquefied form, or converted to methanol, dimethyl ether (DME), gas-to-liquid (GTL) fuel or Fischer–Tropsch diesel. Both PM and NOx emissions from natural-gas-derived fuels are very low, while sulphur emissions are usually negligible. Liquefied petroleum gas (LPG) is mainly composed of propane and butane (and homologues liquefying at ~800 kPa) and is released during the extraction of crude oil and in oil-refining processes. LPG fuels are based on light, low-carbon, clean-burning hydrocarbons, and their implementation can bring substantial reductions of CO, NOx, hydrocarbon and greenhouse gas emissions.
DME (originally introduced as an ignition improver for methanol) can be produced from different feedstocks such as natural gas, coal, oil residues and biomass. It has good ignition properties (high cetane number and low auto-ignition temperature); moreover, its simple chemical structure and high oxygen content result in soot-free combustion in engines.
Arcoumanis et al. reviewed the potential benefits of using DME as an alternative fuel in standard compression-ignition engines with slight modifications of the conventional system (paying attention to corrosion and low-lubricity issues). Hydrogen can be used as a fuel in internal combustion engines and in fuel cells with zero pollutant emissions. It can be produced from natural gas as well as by water electrolysis. From the economic point of view, its utilization is controlled by the cost and source of electrical energy. The Toyota Mirai, the first commercialized fuel cell car, has recently found great success, highlighting that the spread of such technologies is limited mainly by infrastructural issues: the distribution chain, storage and handling (both in vehicles and at filling stations). Although these issues are not yet overcome, hydrogen represents a concrete frontier for the automotive industry. Biomass for fuel production can have various origins, such as black liquor, forestry residues, or municipal or industrial waste products. Among the different biomass-based fuels, the most accessible today are biodiesel and ethanol. Other resulting fuels are methanol, DME and Fischer–Tropsch diesel, while gasification of biomass yields biogas-to-liquid fuels. Biodiesel is conventionally made by transesterification of triglycerides with methanol (fatty acid methyl esters). It can be used either pure or blended with regular diesel, with the benefit of reduced CO, CO2, hydrocarbon and PM emissions. Biodiesel combustion produces higher NOx emissions (to be treated with improved catalytic filters) while reducing SOx emissions to almost zero. Rapeseed and sunflower are among the main edible raw-material sources of biodiesel. To minimize the reliance on edible vegetable oils and to exploit naturally available oil plants, Ashraful et al.
studied the fuel properties, engine performance and emission characteristics of biodiesel from various non-edible vegetable oils (karanja, mohua, rubber seed, and tobacco biodiesel), providing a detailed and extensive review of this field. Based on their findings (reduced CO, HC and smoke emissions), they assert that non-edible oils have the potential to replace edible-oil-based biodiesels in the near future (some controversy remains over NOx). In 2013 the total biodiesel production was 6,948 million gallons, an increase of 17% from 2012 to 2013. In 2013 the United States led the world in biodiesel production, followed by Germany, Brazil, Argentina, France and Indonesia (U.S. cost of 3.92 $/gal in 2013). Because of the reduced PM emissions of oxygenated fuels, alcohols are particularly attractive alternatives to conventional fuels. Gravalos et al. described the performance and emission characteristics of spark-ignition engines fuelled with ethanol- and methanol-gasoline blends, highlighting the mixture properties (reported in Table 2). Moreover, alcohols can be produced as biofuels (not necessarily linked to food production). In 2013 the Indian River BioEnergy Center began producing cellulosic ethanol at commercial volumes for the first time and is now among the major technology centers in the field of bioenergy; its goal is to “take wastes and sustainably turn them into advanced biofuel and renewable power”. Methanol can be produced from coal, biomass or even natural gas, while ethanol comes mainly from sugar cane, wheat starch or wine. All car manufacturers have approved the use of E10 (a blend of 10% ethanol and 90% gasoline) and E5 (5% ethanol and 95% gasoline) in ordinary gasoline cars, and these blends are commonly available in the U.S. and in Europe. In Brazil the majority of cars run on neat ethanol or lower-level blends produced from sugar cane, while in the U.S.
the ethanol production (13,300 million gallons in 2013) is mainly based on corn. In 2013 the U.S. led the world market (57% of overall production), followed by Brazil at 27% and the E.U. at 6% (see Figure 3). To give a sense of the magnitude of these numbers, Figure 5 shows data (taken from a U.S. Department of Energy report) on the consumption of renewable and alternative fuels (top) compared with the consumption of traditional fuels (bottom) in the United States for the year 2013.
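The 2013 market shares quoted above can be cross-checked by back-calculating the implied world total from the U.S. volume and share; a minimal sketch using only the figures in the text:

```python
# World fuel-ethanol production shares for 2013, as quoted in the text:
# U.S. 57% of overall production (13,300 million gallons),
# Brazil 27%, E.U. 6%.
us_share, us_volume_mgal = 0.57, 13_300

world_total = us_volume_mgal / us_share   # implied world total
brazil = 0.27 * world_total
eu = 0.06 * world_total

print(f"World total ≈ {world_total:,.0f} Mgal")   # ≈ 23,333 Mgal
print(f"Brazil      ≈ {brazil:,.0f} Mgal")        # ≈ 6,300 Mgal
print(f"E.U.        ≈ {eu:,.0f} Mgal")            # ≈ 1,400 Mgal
```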
Many efforts have been made to move from today's fossil-based economy to a more sustainable economy based on biomass. The reasons can be summarized as follows:
Current global bio-based chemical and polymer production (excluding biofuels) is estimated at around 50 million tonnes. Examples of bio-based chemicals include non-food starch, cellulose fibres and cellulose derivatives, tall oils, fatty acids and fermentation products such as ethanol and citric acid. However, the majority of organic chemicals and polymers are still derived from fossil-based feedstocks, predominantly oil and gas.
Recently, consumer demand for environmentally friendly products, population growth and the limited supply of non-renewable resources have opened new opportunities for bio-based chemicals and polymers.
Bio-based goods can be produced in single-product processes or in integrated biorefinery processes producing both bio-based products and secondary energy carriers (fuels, power, heat), in analogy with oil refineries.
Currently, the main driver for the development and implementation of biorefinery processes is the transportation sector. Significant amounts of renewable fuels are necessary in the short and medium term to meet policy regulations both inside and outside Europe.
A very promising approach to reducing biofuel production costs is to use so-called biofuel-driven biorefineries for the co-production of both value-added products (chemicals, materials, food, feed) and biofuels from biomass resources in a highly efficient, integrated approach.
From an overall point of view, a key factor in the realisation of a successful bio-based economy will be the development of biorefinery systems that are well integrated into the existing infrastructure.
At the global scale, the production of bio-based chemicals could generate US$ 10-15 billion of revenue for the global chemical industry.
Biorefineries can be classified mainly according to the feedstocks used to produce bio-based goods (see figure 1). Major feedstocks are perennial grasses, starch crops (e.g. wheat and maize), sugar crops (e.g. beet and cane), lignocellulosic crops (e.g. managed forest, short-rotation coppice, switchgrass), lignocellulosic residues (e.g. stover and straw), oil crops (e.g. palm and oilseed rape), aquatic biomass (e.g. algae and seaweeds), and organic residues (e.g. industrial, commercial and post-consumer waste). These feedstocks are processed in different units of a biorefinery, called platforms. The platforms include single-carbon molecules such as biogas and syngas; 5- and 6-carbon carbohydrates from starch, sucrose or cellulose; a mixed 5- and 6-carbon carbohydrate stream derived from hemicelluloses; lignin; oils (plant-based or algal); organic solutions from grasses; and pyrolytic liquids. These primary platforms can be converted to a wide range of marketable products using combinations of thermal, biological and chemical processes.
Currently, biogas production is mainly based on the anaerobic digestion (see figure 2) of high-moisture-content biomass such as manure, waste streams from food-processing plants or waste from municipal effluent treatment systems. Biogas production from energy crops will also increase and will have to be based on a wide range of crops grown in versatile, sustainable crop rotations. Biogas production can be part of sustainable biofuel-based biorefineries as it can derive value from wet streams. This value can be increased by optimizing the methane yield and economic efficiency of biogas production and by recovering nutrient value from the digestate streams.
Sugar platforms implement processes to break down sucrose into glucose or to hydrolyse starch or cellulose into glucose. Glucose then serves as feedstock for fermentation processes yielding a variety of important chemical building blocks.
The hydrolysis of hemicelluloses followed by fermentation of the resulting carbohydrate streams can in theory produce the same products as six-carbon sugar streams; however, technical, biological and economic barriers need to be overcome before these opportunities can be exploited. Chemical manipulation of these streams can provide a range of useful molecules (see figure 3).
Indeed, by selective dehydration, hydrogenation and oxidation reactions it is possible to obtain useful products such as sorbitol, furfural, glucaric acid, hydroxymethylfurfural (HMF) and levulinic acid. Over 1 million tonnes of sorbitol is produced per year as a food ingredient, a personal-care ingredient (e.g. in toothpaste), and for industrial use.
Global oleochemical production in 2009 amounted to 7.7 million tonnes of fatty acids and 2.0 million tonnes of fatty alcohols. The majority of fatty acid derivatives are used as surface-active agents in soaps, detergents and personal care products.
Major sources for these oils are coconut, palm and palm kernel oil, which are rich in C12–C18 saturated and monounsaturated fatty acids. Rapeseed oil, high in oleic acid, is a favoured source for biolubricants. Commercialized bifunctional building blocks for bio-based plastics include sebacic acid and 11-aminoundecanoic acid, both from castor oil, and azelaic acid derived from oleic acid. Dimerized fatty acids are primarily used for polyamide resins and polyamide hot melt adhesives.
Biodiesel production has increased significantly in recent years with a large percentage being derived from palm, rapeseed and soy oils. In 2009 biodiesel production was around 14 million tonnes; this quantity of biodiesel co-produces around 1.4 million tonnes of glycerol.
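The 14 Mt of biodiesel and 1.4 Mt of glycerol quoted above imply the common ~10 wt% glycerol co-product ratio of transesterification, which can be encoded as a small estimator (the function name and default are illustrative):

```python
def glycerol_coproduct(biodiesel_tonnes, glycerol_fraction=0.10):
    """Estimate glycerol co-produced with FAME biodiesel.

    The ~10 wt% default is implied by the figures in the text
    (14 Mt biodiesel -> ~1.4 Mt glycerol) and matches the common
    rule of thumb for transesterification.
    """
    return biodiesel_tonnes * glycerol_fraction

# The 2009 figure quoted in the text:
print(round(glycerol_coproduct(14e6)))   # 1400000 tonnes
```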
Glycerol is an important co-product of fatty acid/alcohol production; the glycerol market demand in 2009 was 1.8 million tonnes. Glycerol is also an important co-product of fatty acid methyl ester (FAME) biodiesel production. It can be purified and sold for a variety of uses.
Algae biomass can be a sustainable renewable resource for chemicals and energy. The major advantages of using microalgae as a renewable resource are:
Microalgae can have a high protein content, with all 20 amino acids present. Carbohydrates are also present, and some species are rich in storage and functional lipids. Other valuable compounds include pigments, antioxidants, fatty acids, vitamins, antifungal, antimicrobial and antiviral agents, toxins, and sterols.
Until now, lignin platforms have been mainly based on lignosulfonates (see figure 4). These sulfonates are separated from acid sulfite pulping liquors and are used in a wide range of lower-value applications. Major end-use markets include construction, mining, animal feeds and agricultural uses.
Besides lignosulfonates, Kraft lignin is produced as a commercial product at about 60 kton/y. New extraction technologies will lead to an increase in Kraft lignin production at the mill side, for use as an external energy source and for the production of value-added applications.
The production of bioethanol from lignocellulosic feedstocks could make new forms of higher-quality lignin available for chemical applications. The production of more value-added chemicals from lignin (e.g. resins, composites and polymers, aromatic compounds, carbon fibres) is viewed as a medium- to long-term opportunity, depending on the quality and functionality of the lignin that can be obtained.
Bio-PE: Biorenewable Polyethylene; Bio-PET: Biorenewable Polyethylene Terephthalate; PLA: Polylactic Acid; PHA: Polyhydroxyalkanoates; BP: Biodegradable Polyesters; BSB: Biodegradable Starch Blends; Bio-PVC: Biorenewable Polyvinyl Chloride; RC: Regenerated Cellulose; PLA-B: Polylactic Acid Blends; Bio-PP: Biorenewable Polypropylene; Bio-PC: Biorenewable Polycarbonate.
An international study [14] found that, with favourable market conditions, the production of bulk chemicals from renewable resources could reach 113 million tonnes by 2050, representing 38% of all organic chemical production. Under more conservative market conditions the market could still be a significant 26 million tonnes, representing 17.5% of organic chemical production (see figure 5).
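The two scenario figures above also imply the study's assumed size of total organic chemical production in 2050, which can be recovered by simple division:

```python
# Implied total organic chemical production in 2050, from the two
# scenarios quoted in the text (113 Mt = 38%; 26 Mt = 17.5%):
favourable = 113 / 0.38      # total under favourable conditions
conservative = 26 / 0.175    # total under conservative conditions

print(round(favourable), "Mt")    # 297 Mt
print(round(conservative), "Mt")  # 149 Mt
```

Note that the two scenarios imply quite different totals (~297 Mt vs ~149 Mt), i.e. they differ in overall market growth as well as in the bio-based share.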
Currently commercialised bio-polymers (i.e. PLA, PHA, thermoplastic starch) are demonstrating strong market growth; market analyses show annual growth in the 10-30% range.
Bio-based polymer markets are dominated by biodegradable food packaging and food-service applications. It can be argued that the production of more stable, stronger and longer-lasting biopolymers will lead to CO2 being sequestered for longer periods, and favours recycling over composting, in which the carbon is released very quickly without any energy benefit [5].
Among the most important players in biorefining are Novamont (Italy), leader in biodegradable bags based on Mater-Bi (a bioplastic derived from thermoplastic starch); NatureWorks (U.S.A.), leader in polylactic acid production (a bio-based plastic also used for biodegradable bottles); and Biochemtex, part of the M&G Chemicals Group (Italy), specialized in the production of second-generation bioethanol.
Olefins, mainly ethylene (C2H4) and propylene (C3H6), are key intermediates and feedstocks for the production of a wide range of chemical products, such as polyolefins (polyethylene, PE; polypropylene, PP), mono-ethylene glycol (MEG), ethylene oxide (EO) and derivatives, propylene oxide (PO) and derivatives, polyvinyl chloride (PVC), ethylene dichloride (EDC), styrene, acrylonitrile, cumene, acetic acid, etc.
At present, the worldwide demand for ethylene/propylene exceeds 200 million tons per year, but the conventional processes suffer from a series of problems such as high costs and low conversion efficiency.
In the following, the traditional technologies, i.e. Thermal Steam Cracking (TSC) and Fluid Catalytic Cracking (FCC), are presented first. Then the innovations in olefin production are described and assessed.
TSC is a thermal process in which a feedstock, typically naphtha, ethane or propane, is heated in a furnace comprising both a convection and a radiant section, and mixed with steam to reduce coke formation. The steam addition depends on the TSC feedstock (from 0.2 kg of steam per kg of hydrocarbon for ethane to 0.8 kg of steam per kg of hydrocarbon for naphtha).
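The steam-dilution figures above can be used for a quick furnace-level estimate; the function and the feed rates below are hypothetical illustrations, not a design method:

```python
# Steam-to-hydrocarbon dilution ratios quoted in the text:
# ~0.2 kg steam per kg ethane, ~0.8 kg steam per kg naphtha.
STEAM_RATIO = {"ethane": 0.2, "naphtha": 0.8}

def steam_required(feed_kg_per_h):
    """Total dilution-steam flow (kg/h) for a mixed cracker feed.

    `feed_kg_per_h` maps feedstock name -> mass flow (kg/h).
    """
    return sum(STEAM_RATIO[f] * kg for f, kg in feed_kg_per_h.items())

# E.g. a furnace fed 10,000 kg/h ethane plus 5,000 kg/h naphtha:
print(round(steam_required({"ethane": 10_000, "naphtha": 5_000})))
# 6000 kg/h of steam
```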
The products (ethylene, propylene, butadiene, hydrogen) are then quickly cooled to avoid subsequent reactions (quenching) and separated by means of a series of operations (refer to Figure 1).
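As a minimal illustration of the steam dilution figures above, the following Python sketch computes the dilution-steam flow for a hypothetical feed rate (the 100,000 kg/h figure is an assumption chosen only for illustration):

```python
# Steam dilution in TSC, using the ratios quoted above:
# 0.2 kg steam per kg hydrocarbon for ethane, 0.8 for naphtha.

STEAM_RATIO = {"ethane": 0.2, "naphtha": 0.8}  # kg steam / kg hydrocarbon

def steam_demand(feedstock: str, feed_kg_h: float) -> float:
    """Return the dilution-steam flow rate [kg/h] for a given feed rate."""
    return STEAM_RATIO[feedstock] * feed_kg_h

print(steam_demand("ethane", 100_000))   # 20000.0 kg/h
print(steam_demand("naphtha", 100_000))  # 80000.0 kg/h
```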
The reaction network involved in thermal cracking is complex and, generally, is based on a free-radical mechanism. Basically, two types of reactions occur in a thermal cracking process:
TSC is an energy-intensive process: the specific energy consumption is about 3,050 kcal per kg of produced olefin.
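The specific energy figure above can be converted into more common units; the plant capacity below is a hypothetical assumption used only to illustrate the resulting annual energy demand:

```python
# Unit conversion for the TSC specific energy consumption quoted above
# (about 3,050 kcal per kg of olefin).

KCAL_TO_MJ = 4.184e-3  # 1 kcal = 4.184 kJ

spec_kcal_per_kg = 3050.0
spec_gj_per_t = spec_kcal_per_kg * KCAL_TO_MJ  # MJ/kg is numerically GJ/t

capacity_t_per_year = 1_000_000  # hypothetical 1 Mt/y olefin plant
annual_energy_pj = spec_gj_per_t * capacity_t_per_year / 1e6  # PJ/y

print(round(spec_gj_per_t, 2))     # ~12.76 GJ per tonne of olefin
print(round(annual_energy_pj, 2))  # ~12.76 PJ per year for 1 Mt/y
```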
FCC is a multi-component catalytic system in which the catalyst pellets are “fluidized” by the inlet steam flow rate, and the cracking process takes place at lower temperatures than TSC. A typical block diagram is shown in Figure 2, while a drawing of an FCC reactor is reported in Figure 3.
Traditional olefin production technologies suffer from inefficiency due to high temperatures and high energy costs, complex and expensive separation units, and significant CO2 emissions.
As a consequence, there is growing interest in developing catalytic olefin production technologies that are more flexible, more efficient, less expensive, and have a lower environmental impact.
In the following, some of the most interesting technologies developed during the last years are presented and described.
The Advanced Catalytic Olefins (ACO™) technology has been developed by Kellogg Brown & Root LLC (KBR) and SK Innovation Global Technology. The process is FCC-type, with an improved catalyst able to convert the feedstock into larger quantities of ethylene and propylene, with a higher share of propylene than conventional processes (the ratio of produced propylene to produced ethylene is 1, versus 0.7 for commercial processes). The ACO process produces 10-25% more olefins than traditional FCC processes, with a 7-10% reduction in energy consumed per unit of olefins.
The plant configuration is composed of 4 sections: riser/reactor, disengager, stripper and regenerator. Figure 4 shows a simplified process scheme, while Figure 5 illustrates the first ACO commercial demonstration unit, installed in South Korea, with a production capacity of 40 kta of olefins.
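The propylene-to-ethylene ratios quoted above can be translated into product splits. The sketch below uses the 40 kta demonstration capacity as a convenient basis; the function is ours, for illustration only:

```python
# Split a total olefin stream into ethylene and propylene, given the
# propylene/ethylene ratio quoted above (1.0 for ACO, 0.7 conventional).

def split_olefins(total_t: float, p_to_e_ratio: float):
    """Split a total olefin stream [t] into (ethylene, propylene)."""
    ethylene = total_t / (1.0 + p_to_e_ratio)
    return ethylene, total_t - ethylene

basis = 40_000  # t/y, matching the 40 kta demonstration unit
e_conv, p_conv = split_olefins(basis, 0.7)  # conventional process
e_aco, p_aco = split_olefins(basis, 1.0)    # ACO process

print(round(p_conv))  # 16471 t/y propylene (conventional)
print(round(p_aco))   # 20000 t/y propylene (ACO)
```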
Propylene Catalytic Cracking (PCC) is a fluid-solids naphtha cracking process patented by ExxonMobil, based on an optimized set of catalyst, reactor design and operating conditions able to modulate the reaction selectivity, leading to crucial economic benefits compared with conventional processes.
The PCC process can produce propylene directly at chemical-grade concentration, thus avoiding expensive fractionation units. Moreover, the specific operating conditions allow the minimization of aromatics production.
Exxon is testing the innovative solutions on tailored pilot facilities.

Indmax FCC Process
The Indmax process, developed by the Indian Oil Corporation, converts heavy feedstocks to light olefins. It is an FCC-type process in which the reactions are supported by a patented catalyst able to reduce the contact time, thus leading to higher selectivity towards light olefins (ethylene and propylene).
Another crucial characteristic of the Indmax FCC (I-FCC) process is its high production flexibility: the process can easily be adjusted to modulate the output, maximizing propylene, gasoline, or producing combinations (propylene and ethylene, or propylene and gasoline).

Aither Chemicals’ catalytic process
Aither Chemicals, a company located in the U.S., has developed an innovative catalytic cracking process for the production of ethylene, acetic acid, ethylene derivatives such as ethylene oxide (EO) and ethylene glycol (EG), polyethylene (PE, LLDPE, HDPE), acetic acid derivatives such as acetic anhydride, ethylene-acetic-acid derivatives such as vinyl acetate monomer (VAM) and ethyl vinyl acetate (EVA), and other chemicals and plastics. The process uses oxygen instead of steam and, overall, requires about 80% less energy and produces 90% less carbon dioxide, making it more environmentally sustainable.
Moreover, the CO2 and CO streams are captured at the outlet of the catalytic process and used to produce chemicals and polymers, thus nullifying the greenhouse gas emissions.
The production volumes foreseen for the innovative process are 224 ktons of ethylene, 112 ktons of acetic acid, 30 ktons of CO2 and 15 ktons of CO.

Methane-to-olefins processes
Many research efforts are devoted to finding new routes and process configurations to convert natural gas directly to olefins in low-temperature reactors.
There are two possible methane-to-olefins (MTO) processes:
Even if the direct route seems more interesting, good light-olefin selectivity has not yet been obtained, and MTO processes are more energy-intensive than conventional cracking technologies. The only pre-commercial-scale application has been developed by UOP and Total Petrochemicals in Feluy (Belgium): the plant is an indirect process able to produce ethylene and propylene through methanol and syngas.
UOP has developed an innovative Propane Dehydrogenation (PDH) process able to produce ethylene and propylene at lower cost, thanks to lower energy usage and a more stable platinum-based catalyst. The process, called Oleflex, is divided into three sections: reaction, consisting of four radial-flow reactors; product purification; and catalyst regeneration. Fig. 7 shows a process layout. Currently, 6 Oleflex units are installed, producing more than 1,250,000 MTA of propylene worldwide.
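The reaction behind PDH is C3H8 ⇌ C3H6 + H2. As a rough sketch, the ideal-gas equilibrium conversion can be estimated as below; the enthalpy and entropy values are approximate literature figures, not taken from this article:

```python
# Ideal-gas equilibrium sketch for propane dehydrogenation
# (C3H8 <-> C3H6 + H2), the reaction behind PDH processes such as Oleflex.
from math import exp, sqrt

R = 8.314        # J/(mol K)
DH = 124_000.0   # J/mol, approximate reaction enthalpy (assumption)
DS = 128.0       # J/(mol K), approximate reaction entropy (assumption)

def equilibrium_conversion(T_K: float, P_bar: float = 1.0) -> float:
    """Equilibrium propane conversion for a pure propane feed."""
    K = exp(-(DH - T_K * DS) / (R * T_K))
    # For A <-> B + C from pure A at pressure P: K = x^2 P / (1 - x^2)
    return sqrt(K / (K + P_bar))

print(round(equilibrium_conversion(873.15), 2))  # roughly 0.4 at 600 C, 1 bar
```

Conversion rises with temperature (the reaction is endothermic), which is why PDH units run hot and regenerate the catalyst continuously.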
The Shell Higher Olefins Process (SHOP) is an innovative olefin production technology, developed by Royal Dutch Shell, based on a homogeneous catalyst and used for the production of linear α-olefins (from C4 to C40) and internal olefins from ethene. The process architecture consists of three steps:
At present, SHOP is widely applied, with a worldwide production capacity of 1,190,000 t of linear alpha and internal olefins per year.

Catalytic Partial Oxidation of ethane

ENI and the Italian research centre CNR have developed an ethylene production process through Short Contact Time – Catalytic Partial Oxidation (CPO) of ethane. The process is supported by a patented monolithic catalyst able to improve the ethylene yield up to 55 wt.%.
At present, the technology has been validated through a bench-scale unit, by which the optimal operating conditions have been identified. However, the industrial-scale application is not ready yet, since an optimization of the CPO reactor design and an improvement in catalyst reliability are still needed.
The gasification process is the thermochemical conversion of a carbonaceous solid or liquid into a gas in the presence of a gasifying agent: air, oxygen or steam. By this definition, combustion could also be regarded as a form of gasification; however, gasification requires that the oxygen supply be lower than the amount needed for complete combustion to carbon dioxide and water (the stoichiometric amount). Under these conditions, the reaction products are not only carbon dioxide and water but a combustible gas mixture whose heating value depends on three variables: feed elemental composition, inlet gas composition (air, oxygen or steam) and gasifier typology. Furthermore, the process produces a solid carbonaceous phase (CHAR), condensable vapors (TAR) and ashes.
Gasification can be carried out directly, by adding oxygen (or air) and exploiting the exothermicity of the reactions to provide the energy necessary for the process, or by pyrolysis, supplying heat from outside in the complete absence of oxygen. The gaseous products, essentially hydrogen, carbon monoxide, methane and carbon dioxide, may be used for several purposes, such as heating, electricity generation and the production of chemicals and fuels.
The gasification process was developed on an industrial scale during the 19th century to produce town gas for lighting and cooking. Later, natural gas and electricity replaced it for these applications, and it was used only for the production of some synthetic chemicals. Since the 1970s, following the fossil fuel crises and the realization of dependence on foreign oil, the gasification process has been re-evaluated, in particular biomass gasification, driven also by interest in reducing greenhouse gas emissions and in the local availability of renewable energy sources.
The gasification process can be divided into 4 basic steps (sketched in Figure 1) that occur within a suitable reactor: heating/drying, pyrolysis, gas-solid reactions and gas-phase reactions. When the reactor design ensures high-speed heat transfer and the feed is introduced as small particles, the whole process takes place in a short time (about one second).
Heating and drying: in this first step the temperature reaches about 300°C and the feed is completely dried. The greater the moisture content, the higher the energy needed for drying and the lower the enthalpy of the produced gases. For this reason, a naturally dry (or previously dried) biomass is desirable. During heating there is a typical heat-transfer phenomenon, with the temperature decreasing towards the particle centre: the greater the particle radius, the longer the time required for the treatment.
Pyrolysis: in this second step, a rapid thermal anoxic degradation of the carbonaceous material takes place. The ideal temperature for this purpose is between 400 and 500°C. Released products:
Gases: H2, CO, CH4, CO2 and some other light hydrocarbons.
Vapors: exposure to high temperatures leads to a thermal cracking process generating light and condensable compounds (TAR), consisting essentially of polyaromatic hydrocarbons.
Solids: a porous residue called CHAR, consisting of a carbon residue and inorganic compounds (ash).
Gas-Solid Reactions: reactions occurring between the CHAR and the added gasifying agent (oxygen, steam, or both). The exothermic reactions, with negative enthalpy change (ΔH < 0), help provide energy for endothermic processes such as drying and pyrolysis.
Gas-phase Reactions: there are two main gas-phase reactions, water-gas shift and methanation, which are relevant for synthetic natural gas production.
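For the water-gas shift reaction named above (CO + H2O ⇌ CO2 + H2), the equilibrium constant can be estimated with the widely used Moe correlation; the correlation is our assumption here, not something taken from this article:

```python
# Equilibrium constant of the water-gas shift reaction, estimated with
# the Moe (1962) correlation: Kp = exp(4577.8 / T - 4.33), T in kelvin.
from math import exp

def wgs_kp(T_K: float) -> float:
    """Water-gas shift equilibrium constant (Moe correlation)."""
    return exp(4577.8 / T_K - 4.33)

# Kp falls with temperature, as expected for an exothermic reaction:
for T in (600, 800, 1073):
    print(T, round(wgs_kp(T), 2))
```

Around 800-1100 K the constant passes through unity, which is why shift converters are usually staged at different temperatures.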
Depending on the modality of contact between the gasifying agent and the charge, four reactor types can be identified:
Fixed Bed Gasifiers represent the most consolidated technology thanks to their constructional simplicity, although some difficulties may arise in maintaining a uniform temperature along the reactor. These, in turn, cause problems for both the control system and the quality of the produced syngas. Fixed bed gasifiers are generally used for small-to-medium-size plants (no more than 10-15 tons/hour of biomass). Scaling up to higher capacities is very complex because a uniform temperature distribution cannot be achieved in large beds.
Depending on the point of product gas intake, different geometries can be classified:
When the air velocity is increased above these values, particles are entrained, making it necessary to install a cyclone to reintroduce the solid particles into the reactor. This configuration is called a Circulating Fluidized Bed (CFB).
In air- (or oxygen-) fed fluidized bed reactors, the methane content of the syngas is relatively low because the reactor operates as a high-temperature autothermal reformer.
Entrained Flow Gasifiers accept gaseous, pulverized or slurry feeds. The fuel is fed through burners co-currently with oxygen and, possibly, steam. If biomass is used as feed, it must be pulverized or submitted to a preliminary pyrolysis step. The gasification process takes place at temperatures of about 1200°C and pressures above 20 bar. These operating conditions lead to a non-leachable molten slag and to a syngas with very low TAR content, with a consequent simplification of the downstream purification operations. The high operating pressure results in a compressed syngas that can be used directly in synthesis reactions. The high temperature makes heat recovery from the gases necessary, through coupling with steam and electricity production; in this way an important improvement in process efficiency is achieved.
In Indirect Gasifiers, gasification occurs in the absence of oxygen and therefore without feed combustion. For this reason, the heat required by the endothermic reactions must be supplied from outside, with steam as the gasifying agent. In this configuration, the additional heat can be obtained by exploiting an external source or by burning part of the feed in a separate combustion chamber. The necessary heat can be supplied in different ways:
Both equilibrium thermodynamics and experimental data show that, using steam as the gasifying agent rather than air or oxygen at temperatures in the range of 800-900°C, the methane content grows significantly.
The great and obvious potential of the gasification process is mainly linked to the use of syngas for the production of chemicals such as methanol and fertilizers. Additionally, in some cases gasification can serve the same purpose (e.g. heat and electricity generation) and process the same feed typology as incineration, with benefits mainly related to environmental and economic aspects. The gasification of solid fuels normally used for power production (coal, MSW, etc.) allows a considerable reduction of pollutants such as SOx, NOx and Hg, as well as of CO2, a major cause of global warming.
As regards CO2, some studies have compared the emissions of a gasification-based power plant with those of a combustion-based subcritical pulverized coal plant. The results show that gasification only slightly reduces the CO2-to-energy ratio (745 g/kWh against 770 g/kWh), but an important advantage lies in easier CO2 capture, as the CO2 is more concentrated in the exhaust gas. Gasification also allows easier sulphur and nitrogen removal: while combustion forms SOx and NOx, which are relatively difficult to remove, in gasification 93-96% of the sulphur is transformed into H2S and the remainder into COS, while the nitrogen forms N2 and NH3, the latter removed during syngas cleaning. The H2S can be removed by absorption, producing elemental sulphur as a valuable by-product saleable to fertilizer companies. Furthermore, inside gasifiers the formation of dioxins and furans is unfavoured, and a significant particulate matter reduction is possible with proper treatment. Unlike the ash produced by incineration, the slag from gasification can be used in road bed construction.
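The CO2 figures quoted above imply only a modest direct reduction, which a two-line calculation makes explicit:

```python
# Relative CO2 reduction implied by the figures quoted above:
# 745 g/kWh (gasification-based) vs 770 g/kWh (subcritical pulverized coal).
gasification = 745.0  # g CO2 per kWh
combustion = 770.0    # g CO2 per kWh

reduction_pct = 100.0 * (combustion - gasification) / combustion
print(round(reduction_pct, 1))  # 3.2 (percent)
```

The direct saving is only about 3%; as the text notes, the main advantage is the easier capture of the more concentrated CO2 stream.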
Table 1 shows how gasification process approaches natural gas emissions.
The Gasification Technologies Council has carried out important research analysing the industrial development of gasification plants, summarised in graphs available at: www.gasification.org.
Some of them are shown below (Figures 6, 7, 8). Looking at the global market, gasification capacity in Asia/Australia exceeds that of all the other continents combined, due to the important growth of the chemical, fertilizer and coal-to-liquids industries in Asia (Figure 6). On the other hand, countries with large natural gas reserves invest less in this technology: for example, no gasification plants are currently present in Russia, while China is the most relevant investor in this field, with the highest number of gasification plants (Figure 7). In conclusion, Figure 8 clearly shows that coal represents the present as well as the future of gasifier feedstock.