SCAN’2012

September 23–29

Novosibirsk, Russia

15th GAMM-IMACS International Symposium

on Scientific Computing, Computer Arithmetics

and Verified Numerics

Book of Abstracts

15th GAMM–IMACS International

Symposium on Scientific Computing,

Computer Arithmetics

and Verified Numerics

SCAN’2012

Novosibirsk, Russia

September 23–29, 2012

Book of Abstracts

Institute of Computational Technologies

Novosibirsk, 2012

SCIENTIFIC COMMITTEE

• Götz Alefeld (Karlsruhe, Germany)

• Jean-Marie Chesneaux (Paris, France)

• George F. Corliss (Milwaukee, USA)

• Tibor Csendes (Szeged, Hungary)

• Andreas Frommer (Wuppertal, Germany)

• R. Baker Kearfott (Lafayette, USA)

• Walter Krämer (Wuppertal, Germany)

• Vladik Kreinovich (El Paso, USA)

• Ulrich Kulisch (Karlsruhe, Germany)

• Wolfram Luther (Duisburg, Germany)

• Svetoslav Markov (Sofia, Bulgaria)

• Günter Mayer (Rostock, Germany)

• Jean-Michel Muller (Lyon, France)

• Mitsuhiro Nakao (Fukuoka, Japan)

• Michael Plum (Karlsruhe, Germany)

• Nathalie Revol (Lyon, France)

• Jiří Rohn (Prague, Czech Republic)

• Siegfried Rump (Hamburg, Germany)

• Sergey P. Shary (Novosibirsk, Russia)

• Yuri I. Shokin (Novosibirsk, Russia)

• Wolfgang V. Walter (Dresden, Germany)

• Jürgen Wolff von Gudenberg (Würzburg, Germany)

• Nobito Yamamoto (Tokyo, Japan)

WEB-SITE

http://conf.nsc.ru/scan2012

ORGANIZERS

• Institute of Computational Technologies SD RAS,

http://www.ict.nsc.ru

• Novosibirsk State University,

http://www.nsu.ru

• Novosibirsk State Technical University,

http://www.nstu.ru

• “Scientific Service” Ltd.

SPONSOR

Russian Foundation for Basic Research (RFBR)

http://www.rfbr.ru

ORGANIZING COMMITTEE

• Sergey P. Shary

• Irene A. Sharaya

• Yuri I. Molorodov

• Svetlana V. Zubova

• Andrei V. Yurchenko

• Vladimir A. Detushev

• Dmitri Yu. Lyudvin


Preface

This volume contains peer-refereed abstracts of the 15th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Verified Numerical Computations, Novosibirsk, September 23–29, 2012.

This conference continues the series of international SCAN symposia initiated by the University of Karlsruhe, Germany, and held under the joint auspices of GAMM and IMACS. SCAN symposia have been held in many cities across the world:

Karlsruhe, Germany (1988)
Basel, Switzerland (1989)
Albena-Varna, Bulgaria (1990)
Oldenburg, Germany (1991)
Vienna, Austria (1993)
Wuppertal, Germany (1995)
Lyon, France (1997)
Budapest, Hungary (1998)
Karlsruhe, Germany (2000)
Paris, France (2002)
Fukuoka, Japan (2004)
Duisburg, Germany (2006)
El Paso, Texas, USA (2008)
Lyon, France (2010)

SCAN’2012 strives to advance the frontiers in verified numerical computations, interval methods, as well as their application to computational engineering and science. Topics of interest include, but are not limited to:

• theory, algorithms, and arithmetics for verified numerical computations

• hardware and software support, programming tools for verification

• symbolic and algebraic methods for numerical verification

• verification in operations research, optimization, and simulation

• verified solution of ordinary differential equations

• computer-assisted proofs and verification for partial differential equations

• interval analysis and its applications

• supercomputing and reliability

• industrial and scientific applications of verified numerical computations

We want to thank all contributors and participants of the SCAN’2012 symposium. Without their active participation, we would not have succeeded.

Local Organizers

Contents

Todor Angelov
Solvability of systems of interval linear equations via the codifferential descent method . . . 13

Ekaterina Auer and Stefan Kiel
Uses of verified methods for solving non-smooth initial value problems . . . 15

Fayruza Badrtdinova
Interval of uncertainty in the solution of inverse problems of chemical kinetics . . . 17

Mamurjon Bazarov, Laziz Otakulov, Kadir Aslonov
Software package for investigation of dynamic properties of control systems under interval uncertainty . . . 19

Irina Burova
On constructing nonpolynomial spline formulas . . . 21

Michal Černý and Miroslav Rada
On the OLS set in linear regression with interval data . . . 23

Alexandre Chapoutot, Laurent-Stéphane Didier and Fanny Villers
A statistical inference model for the dynamic range of LTI systems . . . 25

Alexandre Chapoutot, Thibault Hilaire and Philippe Chevrel
Interval-based robustness of linear parameterized filters . . . 27

Chin-Yun Chen
Acceleration of the computational convergence of extended interval Newton method for a special class of functions . . . 29

Chin-Yun Chen
Numerical comparison of some verified approaches for approximate integration . . . 31

Boris S. Dobronets and Olga A. Popova
Numerical probabilistic analysis under aleatory and epistemic uncertainty . . . 33


Vladimir V. Dombrovskii and Elena V. Chausova
Model predictive control of discrete linear systems with interval and stochastic uncertainties . . . 35

Thomas Dötschel, Andreas Rauh, Ekaterina Auer, and Harald Aschemann
Numerical verification and experimental validation of sliding mode control design for uncertain thermal SOFC models . . . 37

Vadim S. Dronov
Limitations of complex interval Gauss-Seidel iterations . . . 39

Tomáš Dzetkulič
Endpoint and midpoint interval representations – theoretical and computational comparison . . . 41

Tomáš Dzetkulič
Rigorous computation with function enclosures in Chebyshev basis . . . 43

Pierre Fortin, Mourad Gouicem and Stef Graillat
Solving the Table Maker’s Dilemma by reducing divergence on GPU . . . 45

Stepan Gatilov
Efficient angle summation algorithm for point inclusion test and its robustness . . . 47

Alexander Harin
Subinterval analysis. First results . . . 49

Alexander Harin
Theorem on interval character of incomplete knowledge. Subinterval analysis of incomplete information . . . 51

Jennifer Harlow, Raazesh Sainudiin and Warwick Tucker
Arithmetic and algebra of mapped regular pavings . . . 53

Behnam Hashemi
Verified computation of symmetric solutions to continuous-time algebraic Riccati matrix equations . . . 54

Oliver Heimlich, Marco Nehmeier and Jürgen Wolff von Gudenberg
Computing interval power functions . . . 57


Oliver Heimlich, Marco Nehmeier and Jürgen Wolff von Gudenberg
Computing reverse interval power functions . . . 58

Milan Hladík
New directions in interval linear programming . . . 60

Jaroslav Horáček and Milan Hladík
Computing enclosures of overdetermined interval linear systems . . . 62

Arnault Ioualalen, Matthieu Martel
Sardana: an automatic tool for numerical accuracy optimization . . . 64

Luc Jaulin
Interval analysis and robotics . . . 66

Maksim Karpov
Using interval branch-and-prune algorithm for lightning protection systems design . . . 68

Masahide Kashiwagi
An algorithm to reduce the number of dummy variables in affine arithmetic . . . 70

Akitoshi Kawamura, Norbert Müller, Carsten Rösnick, Martin Ziegler
Uniform second-order polynomial-time computable operators and data structures for real analytic functions . . . 72

Ralph Baker Kearfott
On rigorous upper bounds to a global optimum . . . 74

Oleg V. Khamisov
Bounding optimal value function in linear programming under interval uncertainty . . . 76

Stefan Kiel, Ekaterina Auer, and Andreas Rauh
An environment for verified modeling and simulation of solid oxide fuel cells . . . 77

Olga Kosheleva and Vladik Kreinovich
Use of Grothendieck’s inequality in interval computations: quadratic terms are estimated accurately modulo a constant factor . . . 79


Elena K. Kostousova
On boundedness and unboundedness of polyhedral estimates for reachable sets of linear systems . . . 81

Walter Krämer
Arbitrary precision real interval and complex interval computations . . . 83

Vladik Kreinovich
Decision making under interval uncertainty . . . 84

Bartłomiej Jacek Kubica
Excluding regions using Sobol sequences in an interval branch-and-bound method . . . 86

Bartłomiej Jacek Kubica and Adam Woźniak
Interval methods for computing various refinements of Nash equilibria . . . 88

Sergey I. Kumkov and Yuliya V. Mikushina
Interval approach to identification of parameters of experimental process model . . . 90

Olga Kupriianova and Christoph Lauter
The libieee754 compliance library for the IEEE 754-2008 standard . . . 93

Boris I. Kvasov
Monotone and convex interpolation by weighted quadratic splines . . . 95

Anatoly V. Lakeyev
On unboundedness of generalized solution sets for interval linear systems . . . 97

Christoph Lauter and Valérie Ménissier-Morain
There’s no reliable computing without reliable access to rounding modes . . . 99

Xuefeng Liu and Shin’ichi Oishi
A framework of high precision eigenvalue estimation for selfadjoint elliptic differential operator . . . 101

Dmitry Yu. Lyudvin, Sergey P. Shary
Comparisons of implementations of Rohn modification in PPS-methods for interval linear systems . . . 103


Shinya Miyajima
Componentwise inclusion for solutions in least squares problems and underdetermined systems . . . 105

Shinya Miyajima
Verified computations for all generalized singular values . . . 107

Yurii Molorodov
Information support of scientific symposia . . . 109

Sethy Montan, Jean-Marie Chesneaux, Christophe Denis, Jean-Luc Lamotte
Towards an efficient implementation of CADNA in the BLAS: example of the routine DgemmCADNA . . . 111

Yusuke Morikura, Katsuhisa Ozaki and Shin’ichi Oishi
Verification methods for linear systems on a GPU . . . 113

Christophe Mouilleron, Amine Najahi, Guillaume Revy
Approach based on instruction selection for fast and certified code generation . . . 115

Dmitry Nadezhin and Sergei Zhilin
JInterval library: principles, development, and perspectives . . . 117

Markus Neher
Verified integration of ODEs with Taylor models . . . 119

Sergey I. Noskov
Searching solutions to the interval multi-criteria linear programming problem . . . 121

Takeshi Ogita
Verified solutions of sparse linear systems . . . 123

Tomoaki Okayama
Error estimates with explicit constants for Sinc quadrature and Sinc indefinite integration over infinite intervals . . . 125

Nikolay Oskorbin and Sergei Zhilin
On methodological foundations of interval analysis of empirical dependencies . . . 127


Katsuhisa Ozaki and Takeshi Ogita
Performance comparison of accurate matrix multiplication . . . 129

Valentin N. Panovskiy
Interval methods for global unconstrained optimization: a software package . . . 131

Anatoly V. Panyukov
Application of redundant positional notations for increasing arithmetic algorithms scalability . . . 133

Anatoly V. Panyukov and Valentin A. Golodov
Computing the best possible pseudo-solutions to interval linear systems of equations . . . 134

Evgenija D. Popova
Properties and estimations of parametric AE-solution sets . . . 136

Alexander Prolubnikov
An interval approach to recognition of numerical matrices . . . 138

Maxim I. Pushkarev, Sergey A. Gaivoronsky
Maximizing stability degree of interval systems using coefficient method . . . 140

Andreas Rauh, Ekaterina Auer, Ramona Westphal, and Harald Aschemann
Exponential enclosure techniques for the computation of guaranteed state enclosures in ValEncIA-IVP . . . 142

Andreas Rauh, Luise Senkel, Thomas Dötschel, Julia Kersten, and Harald Aschemann
Interval methods for model-predictive control and sensitivity-based state estimation of solid oxide fuel cell systems . . . 144

Alexander Reshetnyak, Andrei Kuleshov and Vladimir Starichkov
On computer-aided proof of the correctness of non-polynomial oscillator realization of the generalized Verma module for non-linear superalgebras . . . 146

Siegfried M. Rump
Interval arithmetic over finitely many endpoints . . . 148


Gennady G. Ryabov, Vladimir A. Serov
The bijective coding in the constructive world of R^n_c . . . 149

Ilshat R. Salakhov, Olga G. Kantor
Estimation of model parameters . . . 151

Pavel Saraev
Interval pseudo-inverse matrices: computation and applications . . . 153

Alexander O. Savchenko
Calculation of potential and attraction force of an ellipsoid . . . 155

Kouta Sekine, Akitoshi Takayasu and Shin’ichi Oishi
A numerical verification method for solutions to systems of elliptic partial differential equations . . . 156

Konstantin K. Semenov, Gennady N. Solopchenko, and Vladik Kreinovich
Processing measurement uncertainty: from intervals and p-boxes to probabilistic nested intervals . . . 158

Yaroslav D. Sergeyev
Deterministic global optimization using the Lipschitz condition . . . 160

Yaroslav D. Sergeyev
The Infinity Computer and numerical computations with infinite and infinitesimal numbers . . . 162

Christian Servin, Craig Tweedie, and Aaron Velasco
Towards a more realistic treatment of uncertainty in Earth and environmental sciences: beyond a simplified subdivision into interval and random components . . . 164

Irene A. Sharaya
Boundary intervals and visualization of AE-solution sets for interval system of linear equations . . . 166

Sergey P. Shary, Nikita V. Panov
Randomized interval methods for global optimization . . . 168

Nikolay V. Shilov
Verified templates for design of combinatorial algorithms . . . 170


Semen Spivak
Informativity of experiments and uncertainty regions of model parameters . . . 172

Semen I. Spivak and Albina S. Ismagilova
Analysis of non-uniqueness of the solution of inverse problems in the presence of measurement errors . . . 174

Semen I. Spivak, Olga G. Kantor
Interval estimation of system dynamics model parameters . . . 176

Irina Surodina and Ilya Labutin
Algorithm for sparse approximate inverse preconditioners refinement in conjugate gradient method . . . 178

Akitoshi Takayasu and Shin’ichi Oishi
Computer-assisted error analysis for second-order elliptic equations in divergence form . . . 180

Lev S. Terekhov and Andrey A. Lavrukhin
On affinity of physical processes of computing and measurements . . . 182

Laurent Thevenoux, Matthieu Martel and Philippe Langlois
Automatic code transformation to optimize accuracy and speed in floating-point arithmetic . . . 184

Philippe Théveny and Nathalie Revol
Interval matrix multiplication on parallel architectures . . . 186

Naoya Yamanaka and Shin’ichi Oishi
Fast infimum-supremum interval operations for double-double arithmetic in rounding-to-nearest . . . 188

Ziyavidin Yuldashev, Alimzhan Ibragimov, Shukhrat Tadjibaev
Interval polynomial interpolation for bounded-error data . . . 190

Sergei Zhilin
ANOVA, ANCOVA and time trends modeling: solving statistical problems using interval analysis . . . 192

Vladimir Zhitnikov, Nataliya Sherykhalina and Sergey Porechny
Repeated filtration of numerical results for reliable error estimation . . . 194


Solvability of systems

of interval linear equations

via the codifferential descent method

Todor Angelov

Saint-Petersburg State University
35, Universitetskii prospekt
198504 Saint-Petersburg, Russia

Keywords: linear interval equations, solvability, nonsmooth analysis, codifferential calculus

A system of linear interval equations

Ax = b (1)

is considered in the works [1-3]. Here A = (a_ij) is an interval m × n-matrix, and b = (b_i) is an interval m-vector.

We need the following definitions: a = [a⁻, a⁺] = { x ∈ R | a⁻ ≤ x ≤ a⁺ }, mid a = (a⁻ + a⁺)/2, rad a = (a⁺ − a⁻)/2, and ⟨a⟩ = max{ 0, a⁻, −a⁺ }.

By the (weak) solution set to a system of linear interval equations (1), we mean the set

Ξ(A, b) = { x ∈ R^n | Ax = b for some A ∈ A, b ∈ b },

constructed of all possible solutions of the systems Ax = b with A ∈ A and b ∈ b [2,3].

Statement [1]. The expression

Uni(x, A, b) = min_{1 ≤ i ≤ m} ( rad b_i − ⟨ mid b_i − Σ_{j=1}^{n} a_ij x_j ⟩ )

defines the functional Uni: R^n → R, such that the membership of a vector x ∈ R^n in the solution set Ξ(A, b) of the system of linear interval equations Ax = b is equivalent to nonnegativity of the functional Uni at x,

x ∈ Ξ(A, b) ⟺ Uni(x, A, b) ≥ 0.
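As an illustration (not part of the abstract), the recognizing functional can be evaluated directly from the endpoint representation of A and b. A minimal Python sketch, with every interval stored as a (lo, hi) pair and the helper names chosen here only for exposition:

```python
def mig(lo, hi):
    # mignitude of the interval [lo, hi]: <a> = max{0, lo, -hi}
    return max(0.0, lo, -hi)

def uni(x, A, b):
    """Recognizing functional Uni(x, A, b) = min_i (rad b_i - <mid b_i - sum_j a_ij x_j>).
    A is an m x n matrix of (lo, hi) interval entries, b an m-vector of intervals.
    Then x lies in the solution set Xi(A, b) iff uni(x, A, b) >= 0."""
    vals = []
    for (b_lo, b_hi), row in zip(b, A):
        # interval dot product  sum_j a_ij * x_j  for a real point x
        s_lo = sum(lo * xj if xj >= 0 else hi * xj for (lo, hi), xj in zip(row, x))
        s_hi = sum(hi * xj if xj >= 0 else lo * xj for (lo, hi), xj in zip(row, x))
        mid_b, rad_b = (b_lo + b_hi) / 2, (b_hi - b_lo) / 2
        # interval  mid b_i - [s_lo, s_hi]  =  [mid_b - s_hi, mid_b - s_lo]
        vals.append(rad_b - mig(mid_b - s_hi, mid_b - s_lo))
    return min(vals)
```

For the one-dimensional system with A = [0.5, 1.5] and b = [0.5, 1.5], the functional is nonnegative at x = 1 and negative at x = 4, in agreement with the membership criterion above.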


Consider a locally Lipschitz function f defined on an open set X ⊂ R^n.

Definition [4]. A function f: X → R is called codifferentiable at a point x ∈ X if there exist compact convex sets d̲f(x) ⊂ R^{n+1} and d̄f(x) ⊂ R^{n+1} such that the following expansion holds:

f(x + Δ) = f(x) + max_{[a,v] ∈ d̲f(x)} ( a + (v, Δ) ) + min_{[b,w] ∈ d̄f(x)} ( b + (w, Δ) ) + o(x, Δ),

where o(x, Δ)/‖Δ‖ → 0 as ‖Δ‖ → 0, a, b ∈ R, v, w ∈ R^n.

The pair Df(x) = [d̲f(x), d̄f(x)] is called the codifferential of f at x. A function f is called continuously codifferentiable at a point x ∈ X if it is codifferentiable in a neighborhood of x and if there exists a codifferential mapping Df which is Hausdorff continuous at the point x.
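As a numerical illustration (not from the abstract), the expansion can be checked for the simple codifferentiable function f(x) = |x|. Assuming the standard choice of the hypodifferential d̲f(x) = conv{ [x − |x|, 1], [−x − |x|, −1] } and the trivial hyperdifferential d̄f(x) = {[0, 0]}, the expansion holds exactly, i.e. o(x, Δ) = 0:

```python
def f(x):
    return abs(x)

def expansion(x, d):
    # f(x) + max over the extreme points [a, v] of the hypodifferential of (a + v*d);
    # the hyperdifferential part is {[0, 0]}, so its min contributes nothing
    pairs = [(x - abs(x), 1.0), (-x - abs(x), -1.0)]
    return f(x) + max(a + v * d for a, v in pairs)

# the codifferential expansion reproduces f(x + d) exactly for f = |.|
for x in (-1.5, 0.0, 2.0):
    for d in (-1.0, -0.1, 0.0, 0.3, 1.0):
        assert abs(f(x + d) - expansion(x, d)) < 1e-12
```

Indeed, max(x − |x| + Δ, −x − |x| − Δ) = |x + Δ| − |x|, so the remainder vanishes identically for this function.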

It turns out that most known nonsmooth functions, as well as the functional Uni, are continuously codifferentiable. The codifferential mapping has the property to identify sets of points of nondifferentiability. Note that Uni is multi-extremal, and its graph is constructed of a finite number of hyperplanes. In general, the local minimum points of −Uni lie on intersections of these hyperplanes, which appear to be sets of nondifferentiability of −Uni. This allows the codifferential descent method [4] to reach the local minimum points of −Uni in one or a small number of iterations. Also, the proposed method has the property to “jump out” of local minimum points and descend further.

In comparison, Shary in his paper [1] proposes a solution to the optimization of Uni based on the fact that Uni is concave in every orthant of R^n. Therefore, the localizations of Uni can be studied by means of the tools of convex analysis.

References:

[1] S.P. Shary, Solvability of interval linear equations and data analysis under uncertainty, Automation and Remote Control, 73 (2012), No. 2, pp. 310–322.

[2] S.P. Shary, Finite-dimensional Interval Analysis, Novosibirsk, 2011. Electronic book, accessible at http://www.nsc.ru/interval/Library/InteBooks (in Russian)

[3] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.

[4] V.F. Demyanov, A.M. Rubinov, Constructive Nonsmooth Analysis, Verlag Peter Lang, Frankfurt-am-Main, 1995.


Uses of verified methods for solving

non-smooth initial value problems

Ekaterina Auer and Stefan Kiel

Faculty of Engineering, INKO
University of Duisburg-Essen
D-47048 Duisburg, Germany

Keywords: IVP, ODE, non-smooth systems, interval methods

Many system types from the area of engineering require mathematical models involving non-differentiable or discontinuous functions [1]. The non-smoothness can be obvious, such as that in commonly used models for friction or contact. There are also more obscure cases occurring, for example, in computer-based simulations where if-then-else or similar conditions are used on model variables. The task of finding reliable solutions becomes especially difficult if non-smooth functions appear on the right-hand side of an initial value problem (IVP). On the one hand, such system models are often sensitive to round-off errors. On the other hand, their parameters might be uncertain due to imprecision in measurements or lack of knowledge. Therefore, interval methods represent a straightforward choice for verified analysis of such systems. They guarantee the correctness of results obtained on a computer and can represent purely epistemic bounded uncertainty in a natural way.

However, the application of the existing interval methods to real-life scenarios is challenging since they might provide overly conservative enclosures of exact solutions. Even in the case of simple jump discontinuities, where the solution is not differentiable in just several switching points, the accuracy is poor and, consequently, the resulting enclosures might be too wide [4]. This is probably the reason for the relatively little attention non-smooth problems have received in the last decades, whereas the verified solution of smooth IVPs has been extensively explored. For example, to our knowledge there exists no publicly available verified implementation of a non-smooth IVP solver at the moment. Nonetheless, meaningful outcomes can still be obtained, as is demonstrated in this talk for several examples.

In our contribution, we first identify important types of non-smooth applications along with their corresponding solution definitions. Second, we provide an overview of the existing techniques for verified enclosure of exact solutions to


non-smooth IVPs [2,3,4] and assign a suitable solution method to each of the application types mentioned above. After that, we focus our considerations on a special case in which the switching points are known a priori in a certain sense. For this situation, we describe a simple method to solve non-smooth IVPs using basically the same techniques as in the smooth case. Finally, we demonstrate the applicability of the method using several examples.

References:

[1] A. Filippov, Differential Equations With Discontinuous Righthand Sides,Kluwer Academic Publishers, 1988.

[2] N. Nedialkov and M. von Mohrenschildt, Rigorous Simulation of Hybrid Dynamic Systems with Symbolic and Interval Methods, In Proceedings of the American Control Conference, Anchorage, 2002.

[3] A. Rauh, C. Siebert, and H. Aschemann, Verified Simulation and Optimization of Dynamic Systems with Friction and Hysteresis, In Proceedings of ENOC 2011, Rome, Italy, July 2011.

[4] R. Rihm, Enclosing solutions with switching points in ordinary differential equations, In Computer Arithmetic and Enclosure Methods. Proceedings of SCAN 91, Amsterdam: North-Holland, 1992, pp. 419–425.


Interval of uncertainty in the solution

of inverse problems of chemical kinetics

Fayruza Badrtdinova

Birsk State Social-Pedagogical Academy
International, 10, 452453 Birsk, Russia

Keywords: interval of uncertainty, chemical kinetics

The inverse problem of chemical kinetics is the problem of identifying reactions and rate constants, as well as the other kinetic parameters associated with these reactions. Solving this problem is often hampered by ambiguity in the estimation of specific kinetic parameters. Such an ambiguity reflects the nature of a kinetic model that describes only some features of chemical reactions in a certain area of the reaction. In the inverse problem of chemical kinetics, being a problem of identifying reaction factors, it is necessary to evaluate the uncertainty limits for the kinetic parameter estimates. For this purpose, we suggest using a method that is based on Kantorovich’s idea [1], when only knowledge of the maximal experimental errors is used. Each measured value is considered to be an interval [k], that is, the set of all possible values k bounded by the inequalities k⁻ ≤ k ≤ k⁺ [2]. Under this assumption, each kinetic parameter can be estimated by a region, whose every point is a result of a numerical simulation of the reaction. Considering all these regions together, we obtain a multidimensional area that consists of the points that represent a valid set of the kinetic parameters.

The parameters of chemical kinetics k are found from the differential equations by solving the inverse problem. Depending on the type of the experiment, the system of differential equations of chemical kinetics has different forms:

1) non-steady-state experiment: dx/dt = f1(x, y, k), dy/dt = f2(x, y, k);
2) quasi-steady-state experiment: dx/dt = f1(x, y, k), f2(x, y, k) = 0;
3) equilibrium: f1(x, y, k) = 0, f2(x, y, k) = 0,

where x is the vector of the measurable compounds and y is the vector of the compounds that cannot be measured.

There are but a few methods for determining uncertainty ranges. For example, the direct search method is the simplest but the slowest; its drawback is that the function to be minimized has to be evaluated many times.


In this study, we consider a method based on L.V. Kantorovich’s idea. The following problem is set: for each constant, the uncertainty range (more exactly, its boundaries) has to be found. To find the range for the constant k_j, we need to determine min k_j and max k_j under the condition that the restrictions

|x_exp − x_calc| ≤ ε (1)

are satisfied.

During the search, the direct kinetic problem is solved with a certain set of constants, simultaneously checking that the concentrations found satisfy inequality (1). If the computed concentration values satisfy the inequality, then the given set of constants belongs to the desired uncertainty range. To find a boundary of the desired region d_j, a certain set of constants has to be taken, all constants being fixed except one, for example k_j. The set of constants is determined from a solution of the inverse problem.

The following algorithm for finding the uncertainty range for the constant k_j is considered.

Some value of the constant that satisfies (1) is considered as the initial approximation. Such a value can be found by minimization of a criterion that takes into account the discrepancy between calculations and measurements. Let us assume that an initial point (k_1^0, ..., k_m^0) is found and an initial step h_0 is chosen. To find the desired range for the j-th constant, we determine max k_j (the algorithm for finding min k_j is the same, but the step must be taken with a ‘minus’ sign). By adding the step h_0 to k_j^0, we obtain the following set of constants: (k_1^0, ..., k_j^0 + h_0, ..., k_m^0). Now we solve the direct problem with the available set of constants and check whether inequality (1) holds. If the inequality is satisfied, the point k_j^0 + h_0 belongs to the desired range, and we continue to move to the right if we are searching for max k_j. If inequality (1) is not satisfied at the point k_j^0 + h_0, we decrease the step twofold, h_1 = h_0/2, and add it to k_j^0 to obtain a new set of constants (k_1^0, ..., k_j^0 + h_1, ..., k_m^0). We solve the direct problem with the resulting set of constants and check whether inequality (1) holds. The process is repeated until a step of the required accuracy is obtained. In such a way, the boundaries of the uncertainty range are determined. A similar search procedure is used for the other rate constants.
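The step-halving search above can be sketched in a few lines of Python. Here feasible(k) stands for solving the direct kinetic problem with the constant set k and checking inequality (1); all names are illustrative rather than taken from the authors' code:

```python
def boundary_kj(feasible, k0, j, h0, tol, direction=+1):
    """Step-halving search for one boundary of the uncertainty range
    of the constant k[j] (a sketch; `feasible` is a user-supplied check
    of |x_exp - x_calc| <= eps after solving the direct problem).
    Use direction=+1 for max k_j and direction=-1 for min k_j.
    The initial point k0 must itself satisfy the restriction."""
    k = list(k0)
    h = h0
    while h > tol:                      # stop once the step is small enough
        trial = list(k)
        trial[j] += direction * h
        if feasible(trial):
            k = trial                   # the point belongs to the range: keep moving
        else:
            h /= 2                      # overshoot: halve the step and retry
    return k[j]
```

For instance, if the feasible region for a single constant were the interval [0.5, 2.0] and the search started at 1.0, the procedure would converge to 2.0 with direction=+1 and to 0.5 with direction=-1, to within the chosen tolerance.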

References:

[1] L.V. Kantorovich, On some new approaches to computational methods and observation processing, Siberian Mathematical Journal, 3 (1962), No. 5, pp. 701–709. (in Russian)


[2] A.I. Khlebnikov, On the uncertainty center method, Journal of Analytical Chemistry, 51 (1996), No. 3, pp. 321–322.

Software package for investigation of

dynamic properties of control systems

under interval uncertainty

Mamurjon Bazarov, Laziz Otakulov, Kadir Aslonov

Navoi State Mining Institute
210100 Navoi, Uzbekistan

Keywords: program system, interval uncertainty, parametric identification, control systems of technological objects

Algorithmization of methods of interval analysis encounters essential difficulties caused by the fact that the existing computer hardware and software do not completely match the specific requirements of interval computations. We mean specific computer interval arithmetic with directed rounding, evaluation of interval functions, and some analytical transformations of the expressions.

Experts engaged in the design of automatic control systems deal with mathematical models in the form of systems of differential equations and/or block diagrams with transfer functions. They usually involve one or more parameters of the object and its regulator. Such mathematical models are usually called "parametric". In the program system INTAN-1 (INTerval ANalysis – 1) developed by our team, automatic control systems under interval uncertainty of parameters can be analyzed, provided that, on entry, data and constraints are considered precisely known, while the values of the parameters of the automatic control systems have interval uncertainty.

The program system INTAN-1 consists, basically, of three main parts that are responsible for

• identification of the control objects with interval parameters,

• the analysis of automatic control systems under interval uncertainty,

• computing (interval) parameters of the regulator.


All the other blocks of the system are either auxiliary or utility programs. The structure of INTAN-1 can be represented in a vivid way through a block diagram.

At the start of the program system run, the block "Head program" carries out editing of the input data and checks their correctness. If an error in the input record is detected, then a corresponding diagnostic message is output. In case of success, further processing is launched, that is, forming input and target data, processing the current stage and analyzing the result. Then the name of the next program unit is determined, it is loaded into memory, the system passes control to it, and the user is informed about the details of the program execution.

The block "Program toolkit" consists of the solvers that perform the main work during the solution of the problem. A short description of these blocks is given below.

The block "Auxiliary programs" includes the computational procedures for testing regularity of interval matrices, testing positive definiteness of interval matrices, etc. These are necessary for the analysis of automatic control systems under interval uncertainty.

The block "Algebraic equations solvers" includes MATLAB® implementations of the interval Gauss–Seidel method, the subdifferential Newton method, as well as some other popular techniques for the solution of various problems that arise in connection with linear and nonlinear algebraic equations.

The basic operations used in the analysis of automatic control systems under interval uncertainty are implemented in the block "Analysis of interval automatic control systems".

The end-user, while working with our program system, forms the initial information on the problem being solved. Then the system analyzes this information, constructs an interval model of the problem that supplements the input information, carries out analytical transformations (if necessary), performs interval expansions of the expressions, computes interval extensions of the functions, and, finally, produces a solution to the problem.
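To illustrate the kind of routine the solver blocks contain, here is a minimal sketch of one interval Gauss–Seidel sweep, written in Python on plain (lo, hi) pairs; it is only a sketch (no directed rounding), not the MATLAB code of INTAN-1:

```python
def imul(a, b):
    # product of two intervals: min/max over the four endpoint products
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def isub(a, b):
    # difference of two intervals
    return (a[0] - b[1], a[1] - b[0])

def idiv(a, b):
    # quotient; this sketch requires a divisor not containing 0
    if b[0] <= 0.0 <= b[1]:
        raise ZeroDivisionError("divisor interval contains 0")
    ps = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(ps), max(ps))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: no solution in the box")
    return (lo, hi)

def gauss_seidel_sweep(A, b, x):
    """One interval Gauss-Seidel sweep: tighten each component of the
    enclosure x of the solution set of [A]x = [b]."""
    n = len(x)
    x = list(x)
    for i in range(n):
        acc = b[i]
        for j in range(n):
            if j != i:
                acc = isub(acc, imul(A[i][j], x[j]))
        x[i] = intersect(x[i], idiv(acc, A[i][i]))
    return x
```

Repeating the sweep until the enclosure stops shrinking gives the usual iteration; a verified implementation would additionally round all endpoint operations outward.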


On constructing nonpolynomial spline formulas

Irina Burova

Mathematics and Mechanics Faculty, St. Petersburg State University
Universitetsky prospekt, 28, 198504, Peterhof, St. Petersburg, Russia

[emailprotected]

Keywords: spline

Let m, l, s, n, p be nonnegative integers, l ≥ 1, s ≥ 1, p ≤ s + l − 2, m = s + l, let xk be a mesh of ordered nodes, a = x0 < ... < xk−1 < xk < xk+1 < ... < xn = b, and let the function u ∈ Cm[a, b]. We suppose that ϕj, j = 1, ..., m, is a Chebyshev system on [a, b], in which case the functions ϕj ∈ Cm[a, b], j = 1, ..., m, are strictly monotone and nonzero within [a, b]. The basic functions ωj(x), with supp ωj = [xj−s, xj+l], j = 1, ..., m, can be defined from the system of equations

∑_{k=j−l+1}^{j+s} ωk(x) ϕi(xk) = ∑_{k=1}^{m} cik ϕk(x),   i = 1, 2, ..., m,

where cik = 0 if i ≠ k and ckk = 1. Next, we use ωj(x) in the Lagrange interpolation problem or in the least squares problem. If we take cik ≠ 0, then it is possible to construct nonpolynomial basic splines with required characteristics (e.g., smoothness).

For example, let us discuss how to construct trigonometric basic splines of minimal defect (ωj ∈ C1[a, b]) with three mesh intervals in the support.

Let supp ωj = [xj−1, xj+2]. If x ∈ [xj, xj+1], then we find ωj(x) from the following system:

ωj−1(x) + ωj(x) + ωj+1(x) = 1,

sin(xj−1)ωj−1(x) + sin(xj)ωj(x) + sin(xj+1)ωj+1(x) = c10 sin(x) + c01 cos(x),

cos(xj−1)ωj−1(x) + cos(xj)ωj(x) + cos(xj+1)ωj+1(x) = c02 sin(x) + c20 cos(x).

If x ∈ [xj−1, xj], then we find ωj(x) from the system

ωj−2(x) + ωj−1(x) + ωj(x) = 1,

sin(xj−2)ωj−2(x) + sin(xj−1)ωj−1(x) + sin(xj)ωj(x) = c10 sin(x) + c01 cos(x),

cos(xj−2)ωj−2(x) + cos(xj−1)ωj−1(x) + cos(xj)ωj(x) = c02 sin(x) + c20 cos(x),


and if x ∈ [xj+1, xj+2], then we find ωj(x) from the system

ωj(x) + ωj+1(x) + ωj+2(x) = 1,

sin(xj)ωj(x) + sin(xj+1)ωj+1(x) + sin(xj+2)ωj+2(x) = c10 sin(x) + c01 cos(x),

cos(xj)ωj(x) + cos(xj+1)ωj+1(x) + cos(xj+2)ωj+2(x) = c02 sin(x) + c20 cos(x).

We find the values of the parameters c01, c10, c02, c20 from the condition ωj ∈ C1(R1), thus obtaining c02 = −c01 = cos(h/2) sin(h/2), c10 = c20 = cos²(h/2), h = (b − a)/n. Then, on [xj, xj+1], we have ωj−1(x) = (cos(x − jh − h) − 1)/(2(cos(h) − 1)),

ωj(x) = (cos(h) − cos(x − jh − h/2) cos(h/2))/(cos(h) − 1),   ωj+1(x) = (cos(x − jh) − 1)/(2(cos(h) − 1)).

Hence,

ωj(x) =
    (cos(x − jh + h) − 1)/(2(cos(h) − 1)),               x ∈ [xj−1, xj),
    (cos(h) − cos(x − jh − h/2) cos(h/2))/(cos(h) − 1),  x ∈ [xj, xj+1),
    (cos(x − jh − 2h) − 1)/(2(cos(h) − 1)),              x ∈ [xj+1, xj+2].

If the ϕj are 1, sin(kx), cos(kx), k = 1, 2 (ωj ∈ C2) or k = 1, 2, 3 (ωj ∈ C3), then the problem is more complex, but, nevertheless, it can easily be solved using Maple™ [1, 2].

Suppose that we are interested in the value of a physical quantity u(x) that is difficult or impossible to measure directly. To find the value of u(x), several other quantities u(x0) + K1(h)u′(x0), ..., u(xn) + K1(h)u′(xn), K1(h) = tan(h/2), are measured, and then we reconstruct the value of u(x) by the approximation ũ(x) ≈ u(x):

ũ(x) = ∑_{k=j−1,j,j+1} (u(xk) + K1(h)u′(xk)) ωk(x),   x ∈ [xj, xj+1].

We take the values Xk = [xk − dk, xk + dk], X = [xl, xp], x ∈ X, Uk = [uk − tk, uk + tk], U′k = [u′k − sk, u′k + sk], dk > 0, tk > 0, sk > 0, and estimate u(X) using interval computations ([3]).
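As a sanity check on the closed forms above, a short Python sketch (assuming uniform nodes xk = kh with a = 0) evaluates the three pieces of ωj and confirms the first equation of the defining system, ωj−1 + ωj + ωj+1 = 1, on [xj, xj+1]:

```python
import math

def omega(j, x, h):
    """Trigonometric basic spline of minimal defect on uniform nodes
    x_k = k*h (closed form from the text); supp = [x_{j-1}, x_{j+2}]."""
    d = math.cos(h) - 1.0
    if (j - 1)*h <= x < j*h:
        return (math.cos(x - j*h + h) - 1.0) / (2.0*d)
    if j*h <= x < (j + 1)*h:
        return (math.cos(h) - math.cos(x - j*h - h/2.0)*math.cos(h/2.0)) / d
    if (j + 1)*h <= x <= (j + 2)*h:
        return (math.cos(x - j*h - 2.0*h) - 1.0) / (2.0*d)
    return 0.0
```

The partition-of-unity check holds to rounding error on any interior interval; the other two equations of the system can be verified the same way with the stated values of c01, c10, c02, c20.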

References:

[1] I.G. Burova, About trigonometric splines construction, Vestnik St. Petersburg University: Mathematics, Series I, 2 (2004), pp. 7–15.

[2] I.G. Burova, T.O. Evdokimova, About smooth trigonometric splines of the third order, Vestnik St. Petersburg University: Mathematics, Series I, 4 (2004), pp. 12–23.

[3] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, Tokyo, Toronto, 1983.

On the OLS set in linear regression with

interval data

Michal Cerny and Miroslav Rada

Department of Econometrics, University of Economics, Prague
Winston Churchill Square 4
13067 Prague, Czech Republic
[emailprotected], [emailprotected]

Keywords: possibilistic interval regression, OLS set, zonotope

Consider the linear regression model y = Xβ + ε, where y denotes the vector of (observations of) output data, β denotes the vector of regression parameters, X denotes the matrix of (observations of) input data, which is assumed to have full column rank, and ε denotes the vector of disturbances. Assume that y, the vector of observations of the output variable, ranges over an interval vector y. Using the well-known ordinary least squares (OLS) estimator, we define the OLS set as {(X′X)⁻¹X′y : y ∈ y}. The OLS set consists of all OLS estimates of the regression parameters of the regression model as the vector of observations ranges over y.

For a user of the regression model it is essential to have a suitable description of the OLS set.

The OLS set is a zonotope in the parameter space. We present a method for construction of the vertex description of the OLS set, the inequality description of the OLS set, and computation of the volume of the OLS set. The method, called "Reduction-and-Reconstruction-Recursion", is a uniform approach to the three problems. While it runs in exponential time in the general case (which is not surprising, as the computation of the volume of a zonotope is a #P-hard problem∗), in a fixed dimension (= number of regression parameters) it is a polynomial-time method. We further discuss complexity-theoretic properties of the method in the general case and compare it with other known methods for enumeration of facets, enumeration of vertices and computation of volume of a zonotope.

∗ Unlike the class NP of decision problems, the class #P contains the function (or, more precisely, counting) problems associated with problems in NP.

In general, the OLS set is a polytope which is complex from the combinatorial point of view (i.e., with respect to the number of facets and vertices). Hence it makes sense to seek reasonably simple approximations. Using interval arithmetic, construction of the interval enclosure is trivial. We show a method for finding an ellipsoidal approximation of the Lowner–John type. We present an adaptation of Goffin's algorithm (a version of the shallow-cut ellipsoid method) for construction of an ellipsoidal enclosure. In particular, given a fixed ε > 0, in polynomial time we construct an ellipsoid E(E, s) such that E(d⁻² · E, s) ⊆ OLS ⊆ E((1 + ε)E, s), where E(E, s) is the ellipsoid {x : (x − s)′E⁻¹(x − s) ≤ 1} with E positive definite, OLS is the OLS set, and d is the dimension (number of regression parameters).
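To make the zonotope structure concrete, here is a brute-force Python sketch for the special case of simple regression (intercept plus slope). Since the OLS map y ↦ (X′X)⁻¹X′y is linear, the OLS set is the image of the box y, so its vertices are among the images of the 2ⁿ box vertices enumerated here; this is not the Reduction-and-Reconstruction-Recursion method of the abstract:

```python
import itertools

def ols_simple(xs, ys):
    """Closed-form OLS for y = b0 + b1*x (simple regression)."""
    n = len(xs)
    mx = sum(xs)/n
    my = sum(ys)/n
    sxx = sum((x - mx)**2 for x in xs)
    sxy = sum((x - mx)*(y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return (my - b1*mx, b1)

def ols_set_vertex_candidates(xs, y_lo, y_hi):
    """Images of all 2^n vertices of the interval vector [y_lo, y_hi];
    the vertices of the OLS set (a zonotope) are among these points."""
    return [ols_simple(xs, corner)
            for corner in itertools.product(*zip(y_lo, y_hi))]
```

Taking the convex hull of these candidate points yields the exact OLS set in this toy setting; the exponential enumeration is precisely what the polynomial-time fixed-dimension method avoids.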

References:

[1] D. Avis, K. Fukuda, Reverse search for enumeration, Discrete Applied Mathematics, 65 (1996), pp. 21–46.

[2] J.L. Goffin, Variable metric relaxation methods. Part II: The ellipsoid method, Mathematical Programming, 30 (1984), pp. 147–162.

[3] M. Grotschel, L. Lovasz, A. Schrijver, Geometric Algorithms and Combinatorial Optimization, Springer, Germany, 1993.

[4] L.J. Guibas, A. Nguyen, L. Zhang, Zonotopes as bounding volumes, Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '03), SIAM, Pennsylvania, 2003.

[5] H. Kashima, K. Yamasaki, A. Inokuchi, H. Saigo, Regression with interval output values, 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, USA, 2008, pp. 1–4.

[6] S. Schon, H. Kutterer, Using zonotopes for overestimation-free interval least-squares — some geodetic applications, Reliable Computing, 11 (2005), pp. 137–155.

[7] H. Tanaka, J. Watada, Possibilistic linear systems and their application to the linear regression model, Fuzzy Sets and Systems, 27 (1988), pp. 275–289.

[8] G. Ziegler, Lectures on Polytopes, Springer, Germany, 2004.


A statistical inference model for the dynamic range of LTI systems

Alexandre Chapoutot, Laurent-Stephane Didier and Fanny Villers

ENSTA ParisTech – 32 bd Victor, 75739 Paris Cedex 15, France
UPMC-LIP6 – 4, place Jussieu, 75252 Paris Cedex 05, France
UPMC-LPMA – 4, place Jussieu, 75252 Paris Cedex 05, France
[emailprotected]

[emailprotected]

[emailprotected]

Keywords: range estimation, linear time-invariant filters, extreme value theory, time series

Control-command and signal filtering algorithms are two main components of embedded software. These algorithms are usually described by linear time-invariant (LTI) systems, which have good properties and are well understood mathematically. In the automotive domain, in order to increase the performance of the implementation of such algorithms, e.g., to reduce execution time or memory consumption, the use of fixed-point arithmetic is almost unavoidable. Nevertheless, at the design level, these algorithms are studied and defined using floating-point arithmetic. As the two arithmetics have very different behaviors, we need tools to transform, with strong guarantees, floating-point programs into numerically equivalent programs using fixed-point arithmetic. This conversion requires two steps. The range estimation deals with the integer part of the targeted fixed-point representation, while the accuracy estimation allows one to define the fractional part. In this work we consider range estimation methods.

The range estimation of LTI systems is an important research field in which two kinds of methods exist: static methods, based on interval [5] or affine [6] arithmetic, and dynamic methods, based on statistical tools. This work is focused on the second kind of methods. In both cases, the first step in the fixed-point conversion is the computation of the dynamic range of each variable in the program, which is mandatory information to determine the fixed-point format. A few statistical models exist for this task, e.g., the previous work [1,3,4]. In particular, the Generalized Extreme Value (GEV) distribution [2], used in [4] and in a restricted form in [3], seems very promising, as it can be used to infer minimal and maximal values of each variable as a function of a user parameter. It defines the probability that these values may be exceeded during the execution of the program.


The use of the GEV distribution shows good results in practice, especially for LTI systems. In this approach, several simulations of the studied systems are performed using random input. For each simulation the maximum is kept. Because dealing with the minimum is similar, we focus our study on the maxima. They appear to follow a GEV distribution. However, this model does not take into account that each simulation produces a time series. In this work we show that data produced by LTI systems can be modeled through an autoregressive (AR) model. This property can be used to show that the distribution of the maxima of the inner variables of LTI systems is a GEV distribution.
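The simulation step can be sketched as follows in Python for a hypothetical first-order LTI filter y[k] = a·y[k−1] + b·u[k] driven by white noise; fitting the GEV distribution to the collected maxima (e.g. with scipy.stats.genextreme.fit) is deliberately left out of the sketch:

```python
import random

def simulate_maxima(a, b, n_runs=200, n_steps=1000, seed=0):
    """Run the filter on Gaussian white-noise input n_runs times and keep
    the maximum of each run: the sample to which a GEV distribution would
    then be fitted."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_runs):
        y, run_max = 0.0, float("-inf")
        for _ in range(n_steps):
            y = a*y + b*rng.gauss(0.0, 1.0)   # one filter step
            run_max = max(run_max, y)
        maxima.append(run_max)
    return maxima
```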

References:

[1] W. Sung and K.-I. Kum, Simulation-based word-length optimization method for fixed-point digital signal processing systems, IEEE Transactions on Signal Processing, 43 (1995), No. 12, pp. 3087–3090.

[2] S. Coles, An Introduction to Statistical Modeling of Extreme Values, Springer, 2001.

[3] E. Ozer, A.P. Nisbet, and D. Gregg, A stochastic bitwidth estimation technique for compact and low-power custom processors, ACM Transactions on Embedded Computing Systems, 7 (2008), No. 3, pp. 1–30.

[4] A. Chapoutot, L.-S. Didier, and F. Villers, Range estimation of floating-point variables in Simulink models, in: Numerical Software Verification (NSV-II), 2009.

[5] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[6] L. de Figueiredo and J. Stolfi, Self-validated numerical methods and applications, in: Brazilian Mathematics Colloquium Monograph, 1997.


Interval-based robustness of linear parameterized filters

Alexandre Chapoutot and Thibault Hilaire and Philippe Chevrel

ENSTA ParisTech – 32 bd Victor, 75739 Paris Cedex 15, France
LIP6 – 4, place Jussieu, 75252 Paris Cedex 05, France
IRCCyN – 1, rue de la Noe, BP 92 101, 44321 Nantes Cedex 3, France
[emailprotected]

[emailprotected]

[emailprotected]

Keywords: linear filters, interval arithmetic, sensitivity analysis

Introduction. This article deals with the resilient implementation of parametrized linear filters (or controllers), i.e. with realizations that are robust with respect to their fixed-point implementation. The implementation of a linear filter/controller in an embedded device is a difficult task due to numerical deteriorations in performances and characteristics. These degradations come from the quantization of the embedded coefficients and the roundoff occurring during the computations.

As mentioned in [1], there are infinitely many equivalent possible algorithms to implement a given transfer function h. To cite a few of them, one can use direct forms, state-space realizations, ρ-realizations, etc. Although they do not require the same amount of computation, all these realizations are equivalent in infinite precision, but they are no longer equivalent in finite precision. The optimal realization problem is then to find, for a given filter, the most resilient realization.

Here we consider an extended problem with filters whose coefficients depend on a set θ of parameters that are not exactly known during the design. Such filters are used, for example, in automotive control, where a very late fine tuning is required.

Linear parametrized filters. Following [3], we denote by Z(θ) the matrix containing all the coefficients used by the realization, by hZ(θ) the associated transfer function, and by θ† the quantized version of θ. Z†(θ†) is then the set of the quantized coefficients, i.e. the quantization of the coefficients Z(θ†) computed from the quantized parameters θ†. The corresponding transfer function is denoted hZ†(θ†).


Performance degradation analysis. The two main objectives of this article are to evaluate the impact of the quantization of θ and Z(θ) on the filter performance, and to estimate the parameters θ that give the worst transfer function error in the set of possible parameters Θ.

For that purpose, there are mainly two kinds of tools to study the degradation of filter performance due to the quantization effect: i) use a sensitivity measure (with respect to the coefficients), based on a first-order approximation and a statistical quantization error model; ii) use interval tools, based on the transfer function with interval coefficients. In both cases, we seek the maximal distance between the exact transfer function hZ(θ) and the quantized one hZ†(θ†). For that purpose, we can use the L2-norm, ‖g‖2 := ((1/(2π)) ∫_0^{2π} |g(e^{jω})|² dω)^{1/2}, or the maximum norm, ‖g‖∞ := max_{ω∈[0,2π]} |g(e^{jω})|.

The measure of the degradation of the finite-precision implementation is then given by ‖hZ(θ) − hZ†(θ†)‖⋄, with ⋄ ∈ {2, ∞}. So the worst-case parameters θ0 can be found by solving

argmax_{θ∈Θ} ‖hZ(θ) − hZ†(θ†)‖⋄ .     (2)

Since Θ is an interval vector, we denote by [h] the interval transfer function. With an interval approach, we can define the following constrained global optimization problem:

Maximize ‖[h]Z†(θ†) − [h]Z(θ)‖⋄ subject to θ ∈ Θ .     (3)

Note that, in both cases, the evaluation of the norms can be done in interval arithmetic with ω ∈ [0, 2π].

We will present the solutions of this problem using interval optimization methods [2], and we will compare them with the statistical sensitivity approach.
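As a small illustration of the distance being maximized, a Python sketch that grid-samples the maximum-norm distance between an exact and a quantized FIR transfer function; grid sampling gives only a lower bound on the true supremum, whereas the interval methods of the abstract yield a verified enclosure over ω ∈ [0, 2π]:

```python
import cmath

def linf_distance(b_exact, b_quant, n_grid=512):
    """Sampled ||h - h_quant||_inf for two FIR filters given by their
    coefficient lists, h(z) = sum_k b[k] z^{-k}, on the unit circle."""
    worst = 0.0
    for i in range(n_grid):
        w = 2.0*cmath.pi*i/n_grid
        z = cmath.exp(1j*w)               # e^{j omega}
        h  = sum(bk*z**(-k) for k, bk in enumerate(b_exact))
        hq = sum(bk*z**(-k) for k, bk in enumerate(b_quant))
        worst = max(worst, abs(h - hq))
    return worst
```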

References:

[1] H. Hanselmann, Implementation of digital controllers – a survey, Automatica, 23 (1987), No. 1, pp. 7–32.

[2] E.R. Hansen and G.W. Walster, Global Optimization Using Interval Analysis, Pure and Applied Mathematics, Marcel Dekker, 2004.

[3] T. Hilaire, P. Chevrel, and J.F. Whidborne, A unifying framework for finite wordlength realizations, IEEE Trans. on Circuits and Systems, 54 (2007), No. 8, pp. 1765–1774.


Acceleration of the computational

convergence of extended interval

Newton method for a special class

of functions

Chin-Yun Chen

Department of Applied Mathematics, NCYU, No 300, University Rd, Chiayi 600
[emailprotected]

Keywords: interval arithmetic, enclosure methods, verified bounds, extended interval Newton method, monosplines, Peano kernels

The interval Newton method (cf. [1,2,3]) is a practical tool for enclosing a unique simple zero x∗ ∈ [x] of a smooth function f ∈ C1[x] in an interval domain [x] such that the width of the enclosure [x∗] satisfies a given error tolerance. In this case, the interval Newton method has quadratic convergence.

If there exist more zeros or a multiple zero in the given domain [x], the interval Newton method can be extended to enclose all the zeros according to a requested resolution (cf. [2,3]). The extension is based on the extended interval division, namely, the division by an interval that contains 0. The extended interval Newton method (XIN) has linear convergence, and its performance depends on the chosen definition of the underlying interval division, cf. [4]. An effective algorithm for XIN that is based on the precise quotient set [5] is suggested in [4]. It has superior effectiveness when the midpoint of [x] happens to be a zero of f and f is not too flat in a neighborhood of the midpoint mid([x]). If f(mid([x])) = 0 and f is flat in a neighborhood of mid([x]), then the algorithm in [4] could be superior in efficiency. In the other cases, it is comparable to the algorithms for XIN that are based on the supersets of the precise quotient set.

One problem of the zero-finding by XIN is that a cluster of redundant intervals could be produced for a multiple zero, where those intervals could be adjacent or nearly disjoint, depending on the chosen algorithm for XIN, cf. [4]. For a pure zero-finding task, such redundancy could be detected by extra inspection or attention. However, if the information on the zeros is to be used for further automatic computation, the redundant intervals can lead to unsatisfactory numerical results. To overcome this problem, extra attention should be paid to the properties of the function f.

This work uses the algorithm in [4] to find all the zeros of Peano monosplines. By Peano monosplines, we mean the Peano kernels regarding the quadrature rules that are constructed for proper (Riemann) integrals. They generally possess more than one multiple zero in their domains; moreover, their zeros are generally required for deriving reliable bounds of Peano error constants. In this work, the properties of Peano monosplines as well as the computational techniques that are useful for the performance of XIN are discussed. Numerical results are then given for Peano monosplines regarding different quadrature rules to demonstrate the improvements in the computational convergence of XIN.
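For reference, one step of the (non-extended) interval Newton iteration, N([x]) = mid([x]) − f(mid([x]))/f′([x]) intersected with [x], can be sketched in Python as follows; the case where the derivative enclosure contains 0, which XIN handles via the extended interval division, is deliberately left out, and no directed rounding is done:

```python
def newton_step(f, df_enclosure, lo, hi):
    """One interval Newton step on [lo, hi] for a simple zero.

    `df_enclosure(lo, hi)` must return an enclosure (dlo, dhi) of
    f'([lo, hi]) not containing 0 (hypothetical interface)."""
    m = 0.5*(lo + hi)
    dlo, dhi = df_enclosure(lo, hi)
    if dlo <= 0.0 <= dhi:
        raise ValueError("derivative enclosure contains 0: use extended division")
    fm = f(m)
    q = sorted((fm/dlo, fm/dhi))          # endpoints of fm / [dlo, dhi]
    new_lo, new_hi = m - q[1], m - q[0]   # m - fm/[df]
    return max(lo, new_lo), min(hi, new_hi)
```

Iterating this step contracts quadratically onto a simple zero, e.g. the zero of x² − 2 on [1, 2].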

References:

[1] G. Alefeld, J. Herzberger, Introduction to Interval Computations,Academic Press, New York, 1983.

[2] U.W. Kulisch, Computer Arithmetic and Validity — Theory, Implementation, and Applications, de Gruyter, Berlin, 2008.

[3] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[4] C.-Y. Chen, Extended interval Newton method based on the precise quotient set, Computing, 92 (2011), No. 4, pp. 297–315.

[5] U.W. Kulisch, Arithmetic operations for floating-point intervals, as Motion 5 accepted by the IEEE Standards Committee P1788 as definition of the interval operations (2009), see [6].

[6] J. Pryce (ed), P1788: IEEE standard for interval arithmetic version 02.2, http://grouper.ieee.org/groups/1788/email/pdfOWdtH2mOd9.pdf (2010).

[7] C.-Y. Chen, A performance comparison of the zero-finding by extendedinterval Newton method for Peano monosplines, preprint.


Numerical comparison of some verified

approaches for approximate integration

Chin-Yun Chen

Department of Applied Mathematics, NCYU, No 300, University Rd, Chiayi 600
[emailprotected]

Keywords: interval arithmetic, enclosure methods, verified bounds, numerical integration

Approximate computation of the definite integral I(f) = ∫_B f(x⃗) dx⃗ over an n-dimensional interval B ∈ IRⁿ, n ∈ N, is an essential task in different fields of science and engineering. Traditional approaches for the numerical integration I(f) ≈ S(f) = ∑_{i=1}^{N} wi f(x⃗i) generally use null rules for error estimation, i.e. E(f) := I(f) − S(f) ≈ S̃(f) − S(f), where S̃(·) and S(·) are integration rules with deg S̃ > deg S and deg S := max{n ∈ N | S(xⁿ) = I(xⁿ)}. Differently from the traditional approaches, verified integrators enclose the discretization error E(f) reliably; more precisely, I(f) ∈ [I(f)] = [S(f)] + [E(f)]. Due to this difference, the numerical integrators that are based on interval arithmetic can be superior in efficiency to the conventional approaches, especially when oscillating integrands are considered, cf. [1,2]. Moreover, verified integrators can also be relatively effective when conventional integrators encounter difficulties, cf. [1,2].

In the literature, different verified approaches are discussed for the numerical integration I(f) ≈ I(p) or I(f) ≈ S(f), where p is an interpolation polynomial of f. The approximation I(f) ≈ I(p) is considered, for example, by the Taylor model method in [3], where p is a Taylor polynomial of f. The verified approaches differ in the approximation rules, the ways of error estimation, and/or the adaptive strategies. All their efforts mainly focus on reducing the width of the error enclosures. The methods of error estimation that are considered in verified integrators include the derivative-free method for analytic functions (cf. [1,4,5]), the Taylor model method using one or more higher (partial) derivatives of a fixed order (cf. [3]), the classical error bounds of the highest orders (cf. [6]), and the adaptive error estimation by the Peano–Sard kernel method (cf. [2,7,8]). The importance of the Peano–Sard kernel method is that it supplies multiple error estimates for each integration rule of a higher degree, which can be realized by interval arithmetic for sufficiently smooth integrands.


It is known that the classical error bounds of the highest orders, depending on the functional behavior of the integrands, are not always practical for error estimation, cf. [7,8,9]. This work gives a numerical comparison of some verified approaches for approximate integration that do error estimation by the Taylor model method in [3], the derivative-free method in [4], and the Peano–Sard kernel method in [2,8].
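As a toy instance of a verified integrator, the following Python sketch encloses a one-dimensional integral by the composite midpoint rule plus its classical remainder bound; the bound on |f″| is supplied by the user here, standing in for the automatically computed enclosures (Taylor models, Peano–Sard kernels) discussed above, and floating-point rounding of the sum itself is ignored:

```python
import math

def enclose_integral(f, d2_bound, a, b, n):
    """Enclose I = int_a^b f dx by the composite midpoint rule S plus the
    classical remainder bound |I - S| <= (b-a) h^2 / 24 * max|f''|."""
    h = (b - a) / n
    s = sum(f(a + (i + 0.5)*h) for i in range(n)) * h   # midpoint sum
    e = (b - a) * h*h / 24.0 * d2_bound                 # rigorous remainder bound
    return s - e, s + e
```

For example, with f = sin on [0, π] and |f″| ≤ 1, the returned interval contains the exact value 2 and shrinks as O(h²).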

References:

[1] K. Petras, Self-validating integration and approximation of piecewise analytic functions, J. Comput. Appl. Math., 145 (2002), pp. 345–359.

[2] C.-Y. Chen, Bivariate product cubature using Peano kernels for local error estimates, J. Sci. Comput., 36 (2008), No. 1, pp. 69–88.

[3] M. Berz, K. Makino, New methods for high-dimensional verified quadrature, Reliable Computing, 5 (1999), pp. 13–22.

[4] K. Petras, Principles of verified numerical integration, J. Comput. Appl. Math., 199 (2004), pp. 317–328.

[5] M.C. Eiermann, Automatic, guaranteed integration of analytic functions, BIT, 29 (1989), pp. 270–282.

[6] U. Storck, An adaptive numerical integration algorithm with automatic result verification for definite integrals, Computing, 65 (2000), pp. 271–280.

[7] C.-Y. Chen, Computing interval enclosures for definite integrals by application of triple adaptive strategies, Computing, 78 (2006), No. 1, pp. 81–99.

[8] C.-Y. Chen, On the properties of Sard kernels and multiple error estimates for bounded linear functionals of bivariate functions with application to non-product cubature, Numer. Math., accepted.

[9] C.-Y. Chen, Verified computed Peano constants and applications in numerical quadrature, BIT, 47 (2007), No. 2, pp. 297–312.


Numerical probabilistic analysis under

aleatory and epistemic uncertainty

Boris S. Dobronets and Olga A. Popova

Siberian Federal University
79, Svobodny Prospect
660041 Krasnoyarsk, Russia
[emailprotected], [emailprotected]

Keywords: numerical probabilistic analysis, epistemic uncertainty, second order histogram

Many important practical problems involve different types of uncertainty. In this paper, we consider Numerical Probabilistic Analysis (NPA) for problems under so-called epistemic uncertainty, which characterizes a lack of knowledge about a considered value. Generally, epistemic uncertainty may be inadequate to the "frequency interpretation" typical for classical probability and for uncertainty description in traditional probability theory. Instead, epistemic uncertainty can be specified by a "degree of belief". Alternative terms to denote epistemic uncertainty are "state of knowledge uncertainty", "subjective uncertainty", "irreducible uncertainty". Sometimes, processing epistemic uncertainty may require the use of special methods [1].

In our work, we develop a technique that uses Numerical Probabilistic Analysis for decision making under epistemic uncertainty of probabilistic nature. One more application of Numerical Probabilistic Analysis is to solve various problems with stochastic data uncertainty.

The basis of NPA is numerical operations on probability density functions of random values. These are the operations "+", "−", "·", "/", "↑", "max", "min", as well as the binary relations "≤", "≥" and some others. The numerical operations of histogram arithmetic constitute the major component of NPA. It is worthwhile to note that the idea of numerical histogram arithmetic was first implemented in the work [2].

Notice that the density function can be a discrete function, a histogram (piecewise constant function), or a piecewise-polynomial function.

Next, we consider the concepts of natural, probabilistic and histogram extensions of functions. We outline the numerical algorithms for constructing such extensions for some classes of functions [3].


Using the arithmetic of probability density functions and probabilistic extensions, we can construct numerical methods that enable us to solve systems of linear and nonlinear algebraic equations with stochastic parameters [4].

To facilitate a more detailed description of the epistemic uncertainty, we introduce the concept of second order histograms, which are defined as piecewise histogram functions [5]. The second order histograms can be constructed using the experience and intuition of experts.

Relying on specific practical examples, we show that the use of the second order histograms may prove very helpful in decision making. In particular, we consider risk assessment of investment projects, where histograms of factors such as Net Present Value (NPV) and Internal Rate of Return (IRR) are computed.
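A minimal Python sketch of one operation of histogram arithmetic, the "+" of two independent histogram-represented quantities with equal bin widths (a real NPA implementation covers all the listed operations and the second order histograms):

```python
def hist_add(h1, h2):
    """Histogram of X + Y for independent X, Y, each given as
    (support_start, bin_width, [bin masses]).  The mass a*b of a bin pair
    spreads triangularly over two output bins, half into each, assuming
    the value is uniform inside each input bin."""
    s1, w, p1 = h1
    s2, w2, p2 = h2
    assert w == w2, "sketch assumes equal bin widths"
    out = [0.0] * (len(p1) + len(p2))
    for i, a in enumerate(p1):
        for j, b in enumerate(p2):
            out[i + j]     += 0.5 * a * b
            out[i + j + 1] += 0.5 * a * b
    return (s1 + s2, w, out)
```

For two uniform densities on [0, 1] this reproduces the bin masses of the exact triangular density of the sum.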

References:

[1] L.P. Swiler, A.A. Giunta, Aleatory and epistemic uncertainty quantification for engineering applications, Sandia Technical Report, SAND2007-2670C.

[2] V.A. Gerasimov, B.S. Dobronets, and M.Yu. Shustrov, Numerical operations of histogram arithmetic and their applications, Automation and Remote Control, 52 (1991), No. 2, pp. 208–212.

[3] B.S. Dobronets, O.A. Popova, Numerical probabilistic analysis and probabilistic extension, Proceedings of the XV International EM2011 Conference, O. Vorobyev, ed., SFU, RIFS, Krasnoyarsk, 2011, pp. 67–69.

[4] B.S. Dobronets, O.A. Popova, Numerical operations on random variables and their application, Journal of Siberian Federal University. Mathematics & Physics, 4 (2011), No. 2, pp. 229–239.

[5] B.S. Dobronets, O.A. Popova, Histogram time series, Proceedings of the X International FAMES2011 Conference, O. Vorobyev, ed., RIFS, SFU, KSTEI, Krasnoyarsk, 2011, pp. 127–130.


Model predictive control of discrete

linear systems with interval and

stochastic uncertainties

Vladimir V. Dombrovskii and Elena V. Chausova

Tomsk State University
36, Lenin ave.
634050 Tomsk, Russia
[emailprotected], [emailprotected]

Keywords: linear dynamic system, interval uncertainty, stochastic uncer-tainty, model predictive control, convex optimization, linear matrix inequalities

The work examines the problem of model predictive control for an uncertain system containing both interval and stochastic uncertainties. We consider a linear dynamic system described by the following equation:

x(k+1) = ( A0(k) + ∑_{j=1}^{n} Aj(k) wj(k) ) x(k) + ( B0(k) + ∑_{j=1}^{n} Bj(k) wj(k) ) u(k),   k = 0, 1, 2, ... .     (1)

Here, x(k) ∈ R^nx is the state of the system at time k (x(0) is assumed to be available); u(k) ∈ R^nu is the control input at time k; wj(k), j = 1, ..., n, are independent white noises with zero mean and unit variance; Aj(k) ∈ R^{nx×nx}, Bj(k) ∈ R^{nx×nu}, j = 0, ..., n, are the state-space matrices of the system.

The elements of the state-space matrices are not known exactly, and we have only the intervals of their possible values:

Aj(k) ∈ Aj,  Bj(k) ∈ Bj,   j = 0, ..., n,  k ≥ 0,     (2)

where Aj ∈ IR^{nx×nx}, Bj ∈ IR^{nx×nu}, j = 0, ..., n; IR is the set of real intervals x = [x̲, x̄], x̲ ≤ x̄, x̲, x̄ ∈ R.

Model predictive control [1] involves the on-line solution of an optimization problem to determine, at each time instant, a fixed number of optimal future control inputs. Although more than one control move is calculated, only the first one is implemented. At the next sampling time, the state of the system is measured, and the optimization is repeated.
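The receding-horizon scheme just described can be sketched as follows (an illustrative toy, not the authors' LMI-based controller; solve_horizon and plant are hypothetical placeholders standing in for the per-step optimization and the measured system):

```python
def receding_horizon(x0, solve_horizon, plant, steps, m=3):
    """Generic receding-horizon loop: an m-move plan is computed at every
    instant, but only its first move is applied (helpers are placeholders)."""
    x, history = x0, []
    for k in range(steps):
        u_plan = solve_horizon(x, m)   # m optimal future control moves
        u = u_plan[0]                  # implement only the first one
        history.append((x, u))
        x = plant(x, u)                # "measure" the next state
    return history

# toy scalar plant x(k+1) = 0.9 x(k) + u(k); the "optimizer" steers x to 0
hist = receding_horizon(
    1.0,
    solve_horizon=lambda x, m: [-0.9 * x] * m,
    plant=lambda x, u: 0.9 * x + u,
    steps=5)
```

The loop itself is independent of how the plan is computed; in the abstract, the plan comes from a min-max optimization solved via linear matrix inequalities.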


Allowing for the two uncertainty types (interval and stochastic) present in the system (1), we consider the following performance objective:

min_{u(k+i|k), i=0,...,m−1}   max_{Aj(k+i)∈Aj, Bj(k+i)∈Bj, j=0,...,n, i≥0}   J(k),

where

J(k) = E { ∑_{i=0}^{∞} ( x(k+i|k)T Q x(k+i|k) + u(k+i|k)T R u(k+i|k) ) | x(k) }.

E{·|·} denotes the conditional expectation; Q ∈ Rnx×nx, Q = QT ≥ 0, R ∈ Rnu×nu, R = RT > 0, are given symmetric weighting matrices; u(k+i|k) is the predictive control at time k+i computed at time k, and u(k|k) is the control move implemented at time k; x(k+i|k) is the state of the system at time k+i derived at time k by applying the sequence of predictive controls u(k|k), u(k+1|k), . . . , u(k+i−1|k) to the system (1), and x(k|k) is the state of the system measured at time k; m is the number of control moves to be computed, with u(k+i|k) = 0 for all i ≥ m; A > 0 (A ≥ 0) means that A is a positive definite (semi-definite) matrix.

We compute the optimal control according to the linear state-feedback law:

u(k + i|k) = F (k)x(k + i|k), i ≥ 0, (3)

where F(k) ∈ Rnu×nx is the state-feedback matrix at time k.

We solve the above problem by using linear matrix inequalities [2], as has been done in [1]. At each time instant k, we solve an eigenvalue problem in order to calculate the state-feedback matrix F(k) in the control law (3) which minimizes the upper bound on J(k). As a result, we get an optimal robust control strategy providing the system with stability in the mean-square sense.

References:

[1] M.V. Kothare, V. Balakrishnan, M. Morari, Robust constrained model predictive control using linear matrix inequalities, Automatica, Vol. 32 (1996), No. 10, pp. 1361–1379.

[2] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994. (Studies in Applied Mathematics, vol. 15)


Numerical verification and experimental validation of sliding mode control design for uncertain thermal SOFC models

Thomas Dotschel1, Andreas Rauh1, Ekaterina Auer2, and Harald Aschemann1

1Chair of Mechatronics, University of Rostock
D-18059 Rostock, Germany
Thomas.Doetschel, Andreas.Rauh, [emailprotected]
2Faculty of Engineering, INKO, University of Duisburg-Essen
D-47048 Duisburg, Germany
[emailprotected]

Keywords: interval-based sliding mode control, numerical verification, experimental validation, real-time implementation

The dynamics of high-temperature solid oxide fuel cell (SOFC) systems can be mainly described by their thermal, fluidic, and electro-chemical behavior. In modeling for control purposes, it is essential to focus especially on the thermal subsystem, which represents the most dominant system part. The admissibility of control strategies for SOFCs is usually characterized by limitations on the maximum fuel cell temperature and on the spatial and temporal variation rates of the internal stack temperature distribution. These constraints are introduced to minimize mechanical strain due to different thermal expansion coefficients of the stack materials and to reduce degradation phenomena of the cell materials.

Control-oriented models for the thermal behavior of SOFC systems are given by ordinary differential equations (ODEs). They can be derived from the first law of thermodynamics for nonstationary processes and represent integral balances of the inflow and outflow of energy, which determine the internal energy. In addition, the internal energy can be directly linked to the temperature of the stack module. The preceding fundamental modeling procedure can be modified to account for the spatial temperature distribution in the interior of a stack module by means of a finite volume semi-discretization. The corresponding nonlinear system models describe, firstly, the transient behavior during the heating phase, secondly, the influence of variable electrical loads during usual system operation, and, finally, the transient cooling process during the shutdown phase of the system.

The parameter intervals and non-verified parameter estimates that have been identified by the procedures presented in [2] provide the basis for the design of robust controllers. To obtain such a controller, we use an extension of classical sliding mode control making use of a suitable Lyapunov function to stabilize the system dynamics despite possible bounded uncertainty in the system parameterization and a priori unknown disturbances. A first simulation study was published in [1].

In this contribution, we extend our considerations in such a way that the enthalpy flow of the cathode gas into the stack module is defined as a control input for the thermal behavior. This enthalpy flow can be influenced by manipulating the air mass flow as well as the temperature difference between the supplied air in the preheating unit and the inlet elements of the fuel cell stack module. If the above-mentioned sliding mode control procedure is employed to determine the enthalpy flow, further physical restrictions have to be accounted for. These restrictions result from the admissible operating ranges of both the valve for the air mass flow and the temperature of the preheating unit. Moreover, the variation rate of the temperature difference between the preheating unit and the stack module's inlet elements has to be restricted to prevent damage due to thermal stress. These feasibility constraints are taken into account using an appropriate cost function which is evaluated along with the design criteria for the guaranteed stabilizing interval-based sliding mode controller.

Employing the results for the interval-based verified parameter identification, we present both numerical simulations and experimental results, the latter validating the control procedures for the SOFC test rig which is available at the Chair of Mechatronics at the University of Rostock.

References:

[1] T. Dotschel, A. Rauh, and H. Aschemann, Reliable Control and Disturbance Rejection for the Thermal Behavior of Solid Oxide Fuel Cell Systems, Proc. of Vienna Conference on Mathematical Modelling, Vienna, Austria, 2012, http://www.IFAC-PapersOnLine.net (accepted).

[2] A. Rauh, T. Dotschel, E. Auer, and H. Aschemann, Interval Methods for Control-Oriented Modeling of the Thermal Behavior of High-Temperature Fuel Cell Stacks, Proc. of 16th IFAC Symposium on System Identification SysID 2012, Brussels, Belgium, 2012.


Limitations of complex interval Gauss-Seidel iterations

Vadim S. Dronov

Altai State University
61, Lenin str.
656049 Barnaul, Russia
[emailprotected]

Keywords: interval analysis, verified computing

The necessity to solve computational problems in the case of incomplete and uncertain input data has been one of the main reasons for the emergence of interval analysis. Nowadays, interval methods are well developed for data described by real intervals. However, some practical problems give rise to models with the same type of uncertainty (data bounded within some intervals), but with the data being complex. Good examples are meso-mechanic algorithms in physics [1], estimation of dynamic functions (like the heat transfer function in [2]), and so on. Consequently, computation methods for complex-valued models are a major issue.

In this work, we try to generalize the Gauss-Seidel iteration method from real intervals to complex intervals and show its possible limitations.

Our basic interval object is the circular complex interval 〈c, r〉 = { x ∈ C : |x − c| ≤ r }. There is no "standard definition" of a complex interval, and different tasks require different basic objects. The basic system is the system of linear equations Ax = b, where A is an interval n × n-matrix and b is an interval vector.

We work with one kind of solution sets, namely the so-called united solution set:

Ξuni(A, b) = { x ∈ Cn | (∃A ∈ A)(∃b ∈ b)(Ax = b) }.

Statement. The classic Gauss-Seidel iteration method can be generalized to the complex case with minimal changes. (This requires replacing the real interval operations by complex ones, and replacing the exact intersection of circular intervals by a circular hull of it.)

Theorem 1. The complex analogues of the Gauss-Seidel iteration method still function, i.e., they do not deteriorate the outer estimate of the solution set at any step and still produce an outer estimate of the solution set as a result.
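To make the circular arithmetic concrete, the following sketch implements disc arithmetic and one Gauss-Seidel sweep (the Disc class and helper are invented for illustration, with no outward rounding, so enclosures hold only up to floating-point error; instead of a hull of the intersection it keeps the tighter of the two valid circular enclosures, a further simplification):

```python
class Disc:
    """Circular complex interval <c, r> = {z in C : |z - c| <= r} (toy)."""
    def __init__(self, c, r=0.0):
        self.c, self.r = complex(c), float(r)
    def __add__(self, o):  return Disc(self.c + o.c, self.r + o.r)
    def __sub__(self, o):  return Disc(self.c - o.c, self.r + o.r)
    def __mul__(self, o):  # standard circular multiplication enclosure
        return Disc(self.c * o.c,
                    abs(self.c) * o.r + abs(o.c) * self.r + self.r * o.r)
    def inv(self):         # exact inverse of a disc not containing zero
        d = abs(self.c) ** 2 - self.r ** 2
        assert d > 0, "disc contains zero"
        return Disc(self.c.conjugate() / d, self.r / d)
    def __truediv__(self, o):  return self * o.inv()
    def __contains__(self, z): return abs(complex(z) - self.c) <= self.r

def gauss_seidel_step(A, b, x):
    """One sweep of the complex interval Gauss-Seidel iteration.  The
    solution set lies in both the old disc and the update, so keeping
    whichever has the smaller radius is a valid (if crude) enclosure."""
    for i in range(len(x)):
        s = b[i]
        for j in range(len(x)):
            if j != i:
                s = s - A[i][j] * x[j]
        upd = s / A[i][i]
        x[i] = upd if upd.r < x[i].r else x[i]
    return x

# small interval system around A = [[4, 1], [1, 4]], b = (5, 5), x* = (1, 1)
A = [[Disc(4, 0.1), Disc(1, 0.1)], [Disc(1, 0.1), Disc(4, 0.1)]]
b = [Disc(5), Disc(5)]
x = gauss_seidel_step(A, b, [Disc(0, 10.0), Disc(0, 10.0)])
```

One sweep already shrinks the initial radius-10 discs while keeping the point solution (1, 1) enclosed, illustrating Theorem 1 on a well-conditioned example.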


The efficiency of interval Gauss-Seidel iterations in the real case is limited by Neumaier's theorem [3], which states that the method works only with the so-called interval H-matrices. The complex case also has limitations, which we formulate below.

In the sequel, an interval n×n-matrix will be called a circular trace dominant matrix (CTD-matrix) if, for any n-dimensional non-zero interval vector u with mid u = 0, the condition

| ∑_{j≠i} aij uj | < | aii ui |

is true for every i.

Theorem 2. If, in the system of equations Ax = 0, the matrix A is not a CTD-matrix, then there exists a starting interval x of any width that cannot be improved by Gauss-Seidel iterations.

Definition. We call an interval n×n-matrix A strongly different from CTD-matrices, with difference coefficient τ, if a vector U with mid U = 0 exists for which | ∑_{j≠i} aij Uj | > τ | aii Ui |.

Statement. If, in the system Ax = b, the matrix A is strongly different from CTD-matrices and the difference coefficient is large enough, then there exist starting interval approximations of any width that are "improvement-resistant" for Gauss-Seidel iterations.

The main difference from the real case, however, is the following theorem, which substantially narrows the applicability of Gauss-Seidel iterations in the case of complex intervals:

Theorem 3. The class of CTD-matrices is empty.

References:

[1] O. Dessombz, F. Thouverez, J.-P. Laine, L. Jezequel, Analysis of mechanical systems using interval computations applied to finite element methods, Journal of Sound and Vibration, 2001, No. 5, pp. 949–968.

[2] Y. Candau, T. Raissi, N. Ramdani, L. Ibos, Complex interval arithmetic using polar form, Reliable Computing, 12 (2006), No. 1, pp. 1–20.

[3] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.


Endpoint and midpoint interval representations – theoretical and computational comparison∗

Tomas Dzetkulic

Institute of Computer Science, Academy of Sciences of the Czech Republic
Pod Vodarenskou vezi 2
182 07 Prague 8, Czech Republic
[emailprotected]

Keywords: interval arithmetic, interval format, high precision

In classical interval analysis [3], a real value x is represented in a digital computer by an interval x ∈ [xlo, xhi], where xlo and xhi are two floating-point numbers. There are further possible representations of the value of x using two or three floating-point numbers:

• x ∈ [xmid − e, xmid + e] using two floating point numbers xmid and e,

• x ∈ [xmid + elo, xmid + ehi] using three floating point numbers xmid, elo and ehi.

To motivate our work, let us consider an example where x = 1/15. Using the classical interval format, the tightest possible interval that contains x using the standard double precision floating-point format [2] is

[6.66666666666666657415× 10−2, 6.66666666666666796193× 10−2].

The width of this interval is approximately 1.387779 × 10−17. Using the alternative representation with xmid .= 6.66666666666666657415 × 10−2, we can obtain e = ehi .= 9.252 × 10−19, elo = 0. With either one of the alternative representations, we achieve an order of magnitude tighter enclosure of the actual value of x. Moreover, if we allow the midpoint to lie outside the interval, we can set elo close to ehi to get an interval width less than 10−30 using just three floating-point numbers.
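The 1/15 example can be reproduced in a few lines (an illustrative sketch using exact rationals, not the paper's implementation; requires Python 3.9+ for math.nextafter):

```python
from fractions import Fraction
import math

x = Fraction(1, 15)
xr = float(x)                          # nearest double to 1/15
# classical endpoint interval [lo, hi] enclosing x
lo = xr if Fraction(xr) <= x else math.nextafter(xr, -math.inf)
hi = xr if Fraction(xr) >= x else math.nextafter(xr, math.inf)
endpoint_width = hi - lo               # one unit in the last place

# midpoint-radius form [xmid - e, xmid + e] with the exact radius
xmid = xr
e = float(abs(x - Fraction(xmid)))
midrad_width = 2 * e

print(endpoint_width, midrad_width)    # midrad is several times tighter
```

Here the endpoint width is a full ulp (about 1.39 × 10−17), while the midpoint-radius width is about 1.85 × 10−18, matching the magnitudes quoted above.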

∗This work was supported by Czech Science Foundation grant 201/09/H057, Ministry of Education, Youth and Sports project number OC10048 and long-term financing of the Institute of Computer Science (RVO 67985807). The author would like to thank Stefan Ratschan for a valuable discussion and helpful advice.


For midpoint intervals, the optimal error can be estimated based on the work of Dekker [1]. Intervals of the form [xmid − e, xmid + e] with such optimal error estimation were used in [5], but no theoretical comparison with classical interval analysis was given. On the other hand, the theoretical comparison in [4] is based on a suboptimal error estimation. In our work, we compare midpoint and endpoint intervals using the optimal error estimation. Moreover, we introduce intervals of the form [xmid + elo, xmid + ehi], and we show that, in the case of narrow intervals, both alternative forms provide tighter enclosures compared to the classical interval form. We also compare all interval representations using computational benchmarks.

References:

[1] T.J. Dekker, A floating-point technique for extending the available precision, Numerische Mathematik, 18 (1971/72), pp. 224–242.

[2] IEEE Standard for Binary Floating-Point Arithmetic (Technical Report IEEE Std 754-1985), The Institute of Electrical and Electronics Engineers, 1985.

[3] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[4] S.M. Rump, Fast and parallel interval arithmetic, BIT, 39 (1999), pp. 534–554.

[5] A. Wittig, M. Berz, Rigorous high precision interval arithmetic in COSY INFINITY, Proceedings of the Fields Institute, 2009.


Rigorous computation with function enclosures in Chebyshev basis∗

Tomas Dzetkulic

Institute of Computer Science, Academy of Sciences of the Czech Republic
Pod Vodarenskou vezi 2
182 07 Prague 8, Czech Republic
[emailprotected]

Keywords: initial value problem, rigorous integration, Chebyshev basis

When rigorously computing with a real continuously differentiable function, a Taylor polynomial is commonly used to replace the actual function. The Taylor polynomial remainder is bounded to create a conservative enclosure of the function. One of the applications of such a rigorous function enclosure lies in verified algorithms for the integration of nonlinear ordinary differential equations [4].

In this paper, we present a multivariable function enclosure using a Chebyshev polynomial instead of the Taylor polynomial. Since Chebyshev series converge faster than Taylor series for analytic functions, our function enclosures approximate real analytic functions with tighter remainder intervals.
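The convergence advantage can be illustrated numerically (a non-rigorous floating-point comparison of same-degree approximants of exp on [−1, 1], not the paper's verified enclosures; uses NumPy's Chebyshev interpolant):

```python
import math
import numpy as np

deg = 5
xs = np.linspace(-1.0, 1.0, 1001)

# truncated Taylor series of exp at 0, degree 5
taylor = sum(xs**k / math.factorial(k) for k in range(deg + 1))

# Chebyshev interpolant of the same degree on the default domain [-1, 1]
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.exp, deg)

err_taylor = np.max(np.abs(np.exp(xs) - taylor))
err_cheb = np.max(np.abs(np.exp(xs) - cheb(xs)))
print(err_taylor, err_cheb)   # the Chebyshev error is markedly smaller
```

At degree 5 the Taylor error peaks near the interval ends (about 1.6 × 10−3 at x = 1), while the Chebyshev interpolant spreads the error and stays below 10−4, which is the effect the tighter remainder intervals exploit.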

In the existing works on Chebyshev polynomials [1,2], only operations with functions of one variable are described. In [1], the function approximation is stored in the form of function values at the Chebyshev nodes. The authors use non-rigorous methods to compute coefficients of Chebyshev polynomials, and no enclosure of the exact function value is guaranteed. On the other hand, the authors of [2] use rigorous methods, but only addition, multiplication and composition of one-variable functions are presented.

We present an efficient algorithm for rigorous addition, subtraction, multiplication, division, composition, integration and differentiation of multi-variable Chebyshev function enclosures. Our publicly available implementation [3] supports function enclosures based on both Taylor and Chebyshev polynomials and allows their comparison. Computational experiments with the initial value problem of ordinary differential equations show that the approach is competitive with the best publicly available verified solvers.

∗This work was supported by Czech Science Foundation grant 201/09/H057, Ministry of Education, Youth and Sports project number OC10048 and long-term financing of the Institute of Computer Science (RVO 67985807). The author would like to thank Stefan Ratschan for a valuable discussion and helpful advice.

References:

[1] Z. Battles and L.N. Trefethen, An extension of MATLAB to continuous functions and operators, SIAM J. Sci. Comput., 25 (2004), No. 5, pp. 1743–1770.

[2] N. Brisebarre and M. Joldes, Chebyshev interpolation polynomial-based tools for rigorous computing, Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation ISSAC'10, ACM, New York, 2010, pp. 147–154.

[3] T. Dzetkulic, software package ODEIntegrator, http://odeintegrator.sourceforge.net, 2012.

[4] K. Makino and M. Berz, Rigorous integration of flows and ODEs usingTaylor models, Symbolic Numeric Computation, 2009, pp. 79–84.


Solving the Table Maker's Dilemma by reducing divergence on GPU

Pierre Fortin, Mourad Gouicem, and Stef Graillat

UPMC Univ Paris 06 and CNRS UMR 7606, LIP6
4 place Jussieu, F-75252, Paris cedex 05, France
pierre.fortin, mourad.gouicem, [emailprotected]

Keywords: Table Maker's Dilemma, Graphical Processing Unit, correct rounding, elementary functions

The IEEE 754-2008 standard recommends correctly rounding elementary functions. However, these functions are transcendental, and their results can only be approximated with some error ε > 0. If ◦p is a rounding function at precision p, there may exist some arguments x, called (p, ε) hard-to-round arguments, such that ◦p(f(x) − ε) ≠ ◦p(f(x) + ε), inducing an uncertainty on the rounding of f(x). Finding an error ε such that there are no (p, ε) hard-to-round arguments is known as the Table Maker's Dilemma (TMD).
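The hard-to-round condition can be illustrated at a toy 8-bit precision (round_to_p_bits and hard_to_round are hypothetical helpers written for this sketch; real TMD searches operate on 53-bit doubles with Lefevre's or the SLZ algorithm):

```python
from fractions import Fraction

def round_to_p_bits(v, p):
    """Round a positive rational v to the nearest number with a p-bit
    significand (toy round-to-nearest; ties round up)."""
    e = 0
    while v >= 2:
        v /= 2
        e += 1
    while v < 1:
        v *= 2
        e -= 1
    scaled = v * 2 ** (p - 1)          # significand now in [2**(p-1), 2**p)
    n = int(scaled)
    if scaled - n >= Fraction(1, 2):
        n += 1
    return Fraction(n, 2 ** (p - 1)) * Fraction(2) ** e

def hard_to_round(y, p, eps):
    """True if the roundings of y - eps and y + eps disagree, i.e. an
    approximation of accuracy eps cannot decide the correct rounding."""
    return round_to_p_bits(y - eps, p) != round_to_p_bits(y + eps, p)

# a value just above the midpoint between two 8-bit numbers is hard to
# round whenever eps exceeds its tiny distance to the midpoint
y = Fraction(257, 256) + Fraction(1, 10**6)
print(hard_to_round(y, 8, Fraction(1, 10**5)))   # True
print(hard_to_round(y, 8, Fraction(1, 10**8)))   # False
```

Shrinking ε below the distance to the nearest rounding breakpoint resolves the ambiguity, which is exactly why the TMD asks for the worst-case such distance over all arguments.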

There exist two major algorithms to solve the TMD for elementary functions: Lefevre's algorithm and the SLZ algorithm [2, 3]. The most computationally intensive step of these algorithms is the (p, ε) hard-to-round argument search, since its complexity is exponential in the size of the targeted format. It takes, for example, several years of computation to get all of them for the classic exponential function in double precision, and the same holds for all other classical elementary functions. Hence, getting (p, ε) hard-to-round arguments is a challenging problem. In order to obtain these (p, ε) hard-to-round arguments for larger formats (extended precision, quadruple precision), the implemented algorithms should be able to efficiently operate on petaflops systems. In the long term, we would expect the correct rounding of some functions to be required in the next versions of the IEEE 754 standard, which will allow to completely specify all the components of numerical software.

High-performance computing systems increasingly rely on many-core chips such as Graphical Processing Units (GPUs), which present a partial SIMD execution model (Single Instruction Multiple Data). However, when the control flows of the threads on a SIMD unit diverge, the execution paths are serialized. Hence, in order to use a GPU efficiently, one has to avoid divergence, i.e., to maintain a regular control flow within each group of threads executed on the same SIMD unit.

This work is a first step toward solving the TMD on many-core architectures. We focused on Lefevre's algorithm [2], as it is efficient for double precision. Also, it is embarrassingly parallel and fine-grained, which makes it suitable for GPU. We first deployed this algorithm on GPU using the most efficient (to our knowledge) implementation techniques [5]. Then we redesigned it using the concept of continued fractions. This made it possible to obtain a better understanding of Lefevre's algorithm and a new algorithm which is much more regular. More precisely, we strongly reduce two major sources of divergence of Lefevre's algorithm: loop divergence and branch divergence. Compared to the reference implementation of Lefevre's algorithm on a single high-end CPU core, the deployment of Lefevre's algorithm on an NVIDIA Fermi GPU offers a speedup of 15x, whereas the new algorithm enables a speedup of 52x.

References:

[1] J.M. Muller, N. Brisebarre, F. de Dinechin, C.P. Jeannerod, V. Lefevre, G. Melquiond, N. Revol, D. Stehle, S. Torres, Handbook of Floating-Point Arithmetic, Birkhauser, 2009.

[2] V. Lefevre, New results on the distance between a segment and Z². Application to the exact rounding, Proceedings of the 17th IEEE Symposium on Computer Arithmetic, 2005, pp. 68–75.

[3] D. Stehle, V. Lefevre, P. Zimmermann, Searching worst cases of a one-variable function using lattice reduction, IEEE Transactions on Computers, 54 (2005), pp. 340–346.

[4] A. Ziv, Fast evaluation of elementary mathematical functions with correctly rounded last bit, ACM Trans. Math. Softw., 17 (1991), pp. 410–423.

[5] P. Fortin, M. Gouicem, S. Graillat, Towards solving the Table Maker's Dilemma on GPU, Proceedings of the 20th International Euromicro Conference on Parallel, Distributed and Network-based Processing, 2012, pp. 407–415.


Efficient angle summation algorithm for point inclusion test and its robustness

Stepan Gatilov

Novosibirsk State University
2, Pirogova st.
630090 Novosibirsk, Russia
[emailprotected]

Keywords: point inclusion, point-in-polygon, angle summation, winding angle, bounding volume hierarchy, numerical stability

The point inclusion test is a classical problem of computational geometry. The problem statement is: given a two-dimensional domain bounded by a piecewise smooth Jordan curve, determine whether a certain point belongs to it. The boundary curve is comprised of a sequence of smooth curvilinear edges. It should be noted that the classical problem definition considers only a polygonal boundary described by the sequence of its vertex points.

An excellent survey of the classical point-in-polygon methods is given in [1]. In its conclusion, it advises to "avoid the angle summation test like the plague" due to its high constant factor in the time complexity. The most notable other methods are the ray intersection test and the test based on barycentric coordinates. The robustness of these methods is studied in [2]. The barycentric coordinate test is shown to be unstable when the point lies on a diagonal of the polygon. The ray intersection test can potentially fail when the ray passes through a vertex, although this instability can be completely eliminated for the classical problem definition.

The geometric data in computer-aided design is often imprecise. The boundary is represented by individual edges, and the incident vertices of any two consecutive edges may differ by up to the tolerance value. This gap between incident vertices renders the ray intersection and barycentric coordinate tests unstable. However, the angle summation method remains backward stable.

The purpose of this work is twofold: first, to analyze the stability of the angle summation algorithm; and second, to introduce a preprocessing optimization for the many-points-and-one-domain scenario.


The numerical behavior of the angle summation algorithm is analyzed. To calculate the angle between vectors, the FPATAN x87 instruction (atan2) is used for maximum accuracy. Line segments and circular arcs are considered as the edges. It is proven that the answer of the point inclusion query must be correct provided that the point is far enough from the boundary. The answer remains correct even if the edges are perturbed slightly, potentially introducing gaps between incident vertices.
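For the classical polygonal case, the angle summation test can be sketched as follows (an illustrative version using Python's math.atan2 rather than the x87 FPATAN instruction; not the paper's implementation, which also handles circular arcs):

```python
import math

def winding_angle(point, polygon):
    """Total signed angle swept around `point` by walking the polygon
    boundary once; polygon is a list of (x, y) vertices."""
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        a0 = math.atan2(y0 - py, x0 - px)
        a1 = math.atan2(y1 - py, x1 - px)
        d = a1 - a0
        # wrap the per-edge angle difference into (-pi, pi]
        if d > math.pi:
            d -= 2 * math.pi
        elif d <= -math.pi:
            d += 2 * math.pi
        total += d
    return total

def inside(point, polygon):
    # the winding angle is near +-2*pi*k for interior points, near 0 outside
    return abs(winding_angle(point, polygon)) > math.pi

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Because the decision threshold sits halfway between 0 and 2π, a small perturbation of the summed angle (e.g. from gaps between incident vertices) does not flip the answer for points away from the boundary, which is the robustness property analyzed above.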

With some preprocessing, the subsequent angle summation queries can be accelerated significantly as follows. An axis-aligned bounding box is calculated for each edge, and a bounding volume hierarchy (BVH) is constructed from all of them. Angle summation queries are processed by traversing the BVH recursively. The winding angle is calculated instantly for a BVH node if the point lies outside of the corresponding box.

Given that all the boxes are tight, the algorithm works in O(K log(n/K) + KT) time, where T is the amount of time required to calculate the winding angle for a single edge, and 2πK is the "absolute winding angle" of the boundary (assuming K ≤ n). The absolute winding angle is the sum of unsigned winding angles over all the infinitesimal pieces of the boundary. It is expected that often K ≪ n in practice. For instance, for any convex domain K = 1, which means that queries take the optimal O(log n + T) time. Some upper bounds for K are given.

References:

[1] E. Haines, Point in polygon strategies, Graphics Gems IV, ed. Paul Heckbert, Academic Press, 1994, pp. 24–46.

[2] S. Schirra, How reliable are practical point-in-polygon strategies?, Proceedings of the 16th Annual European Symposium on Algorithms, Springer-Verlag, Berlin, Heidelberg, 2008, pp. 744–755.


Subinterval analysis. First results

Alexander Harin

Modern University for the Humanities
32, Nizhegorodskaya str.
109029 Moscow, Russia
[emailprotected]

Keywords: incomplete information, large databases, Internet, economics

The report is devoted to subinterval analysis as a new direction, a branch of interval analysis. Subinterval analysis, or subinterval weights analysis, was founded in [1]. It usually deals with weights as whole characteristics of subintervals.

1. Subinterval arithmetic

Suppose a finite quantity or function w(xk) is defined on an interval XTotal and is known within the accuracy of adjacent subintervals Xs, s = 1, 2, . . . , S, 1 < S < ∞, Xs < Xs+1, of XTotal ≡ X1..S. At that, many characteristics of w(xk), such as its moments (mean, dispersion, etc.), are interval ones.

Let us define the weight of Xs as

wht Xs ≡ ws ≡ ∑_{xk∈Xs, xk∉Xs+1} w(xk).

Subinterval arithmetic calculates and rigorously evaluates characteristics of quantities, intervals and subintervals, e.g., by the "Ring of formulas" for the width widMTotal of the interval MTotal of the mean of w(xk):

widMTotal = ∑_{s=1}^{S} widXs ws = widX1..S − ∑_{s=1}^{S} widXs ∑_{n=1,...,S, n≠s} wn = widXTotal − ∑_{s=1}^{S} ws ∑_{n=1,...,S, n≠s} widXn.
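The three expressions in the chain above agree whenever the weights sum to 1 and the widths sum to widXTotal, which is easy to check numerically (an illustrative sketch, not from the report):

```python
import random

# Numerical check of the "Ring of formulas" for wid M_Total.
random.seed(1)
S = 5
wid = [random.uniform(0.5, 2.0) for _ in range(S)]   # widths wid X_s
wts = [random.random() for _ in range(S)]
wts = [w / sum(wts) for w in wts]                    # weights w_s, sum = 1
wid_total = sum(wid)                                 # wid X_Total

form1 = sum(wid[s] * wts[s] for s in range(S))
form2 = wid_total - sum(wid[s] * sum(wts[n] for n in range(S) if n != s)
                        for s in range(S))
form3 = wid_total - sum(wts[s] * sum(wid[n] for n in range(S) if n != s)
                        for s in range(S))
print(form1, form2, form3)   # all three coincide up to rounding
```

The algebraic reason is that ∑_{n≠s} wn = 1 − ws and ∑_{n≠s} widXn = widXTotal − widXs, so each form reduces to ∑ widXs ws.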

2. Subinterval analysis of inexact information

2.1. Decision making

If the width and weight of any subinterval cannot be less than nonzero values, then nonzero ruptures exist between the interval MTotal of the mean value and the bounds of XTotal (see [1]). These ruptures explain basic utility paradoxes.


2.2. Global optimization

An analog of the Lipschitz condition may be defined for the weights of elementary subintervals and subboxes XElem,s, XElem,t with XElem,s ∩ XElem,t ≠ ∅:

|wht XElem,s − wht XElem,t| ≤ ∆wht ≡ ∆w,

allowing discontinuity of the function and revealing new relations.

3. Subinterval analysis of exact but incomplete information

3.1. Theorem of interval character of incomplete knowledge

If a finite nonnegative quantity is exactly known everywhere except at two points, the distance between them is nonzero, and the values of the quantity at them may vary over not less than a nonzero interval, then any moment of the quantity is known within an accuracy not better than a nonzero interval.

This theorem essentially extends the realm of interval analysis applications.

4. Subinterval approximation of exact information through time, space, ...

4.1. Large databases

A ternary subinterval one-dimensional picture (image) needs only 2 numbers, the coordinates of the two intersections of 3 subintervals. The picture of an N-dimensional plot of 1000^N bytes needs only 2N bytes.

5. Applications: Accounting. Macroeconomics. Economics.Population analysis. Recognition. Internet.

Accounting is a natural application of time subintervals such as months and quarters for gain, profit, etc. Audit incomplete knowledge can be processed by subinterval analysis. Macroeconomics is a natural application of space subintervals such as town, city, province, state, etc. Population subintervals such as sex, age, profession, wage, etc. may be used. Subinterval images and pictures may be used for preliminary analysis and recognition and can greatly accelerate them in large databases. The Internet is also a prospective field for subinterval analysis.

References:

[1] A.A. Harin, About possible additions to interval arithmetic, Proceedings of the X International FAMES2011 Conference, Krasnoyarsk, 2011, http://eventology-theory.ru/0-lec/X-fames2011-24-October.pdf, pp. 356–359 (in Russian).


Theorem on interval character of incomplete knowledge. Subinterval analysis of incomplete information

Alexander Harin

Modern University for the Humanities
32, Nizhegorodskaya str.
109029 Moscow, Russia
[emailprotected]

Keywords: interval analysis, incomplete information, durable processes

Theorem. If a finite nonnegative quantity is defined on a finite segment and is exactly known everywhere except at two points, the distance between these points is nonzero, and the values of the quantity at these points may vary over not less than a nonzero interval, then any moment of the quantity, including the mean and the dispersion of the quantity, is known within an accuracy not better than another nonzero interval.

The proof is quite simple (see [1]), but the theorem extends interval analysis to the fields of exact but incomplete knowledge, of planning and control of durable measurements, research, business, work and other processes, etc.

Analysis example 1. At equal widths widXs = widX1 of the subintervals Xs of a total interval XTotal ≡ X1..S, we obtain from the Novoselov formula (see [1]) for the width widMTotal of the interval MTotal of the total mean value:

widMTotal = ∑_{s=1}^{S} widXs ws = widX1 = widX1..S / S ≡ widXTotal / S.

To rigorously prove this simple but, strictly speaking, not obvious conclusion, we do not need any information about the weights of the subintervals or of the interval.

Analysis example 2. Assume that only the width widXFirst = 2 and the weight wFirst = 0.7 of a single (or first) subinterval XFirst = [2, 4] of a total interval XTotal = [A, B] = [0, 10] are known (see Fig. 1). Then, from the Ring of formulas (see [1]), for the interval MTotal of the total mean value:

wFirst widXFirst ≤ widMTotal,

widMTotal ≤ widXTotal − wFirst (widXTotal − widXFirst),

min MTotal ≥ min XTotal + (min XFirst − min XTotal) wFirst = 0 + 2 × 0.7 = 1.4,

max MTotal ≤ max XTotal − (max XTotal − max XFirst) wFirst = 10 − 6 × 0.7 = 5.8,

2 × 0.7 = 1.4 ≤ widMTotal ≤ 10 − 0.7 × 8 = 4.4.

Figure 1: An illustrative example of the calculation of the interval MTotal of the mean value with the help of the only (or the first) measurement. (The number-line sketch shows XFirst = [2, 4] inside XTotal = [0, 10] and the derived bounds 0 + 2 × 0.7 = 1.4 and 10 − 6 × 0.7 = 5.8.)

Note that, although we use incomplete information, all evaluations in both examples are rigorous and exact, as is usual in interval analysis.
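The arithmetic of Example 2 can be replayed directly (an illustrative check of the stated bounds, not part of the original abstract):

```python
# Recompute Example 2: wid X_First = 2, w_First = 0.7,
# X_First = [2, 4] inside X_Total = [0, 10].
A, B = 0.0, 10.0
x_lo, x_hi = 2.0, 4.0
w_first = 0.7
wid_first = x_hi - x_lo
wid_total = B - A

m_lo = A + (x_lo - A) * w_first          # lower bound of M_Total
m_hi = B - (B - x_hi) * w_first          # upper bound of M_Total
wid_m_lo = w_first * wid_first           # lower bound on wid M_Total
wid_m_hi = wid_total - w_first * (wid_total - wid_first)

print(m_lo, m_hi, wid_m_lo, wid_m_hi)
```

The printed values reproduce the bounds 1.4, 5.8, 1.4 and 4.4 from the text (up to floating-point rounding).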

References:

[1] A.A. Harin, Theorem of interval character of incomplete knowledge. Its applications to planning of experiments, Moscow Science Review, Vol. 16 (2011), No. 12, pp. 3–5 (in Russian).


Arithmetic and algebra of mapped regular pavings

Jennifer Harlow1, Raazesh Sainudiin1,2 and Warwick Tucker3

2Laboratory for Mathematical Statistical Experiments and
1Department of Mathematics and Statistics,
University of Canterbury, Christchurch, NZ
3Department of Mathematics,
Uppsala University, Uppsala, Sweden
[emailprotected]

Keywords: finite rooted binary trees, tree arithmetic and algebra

A regular paving [1,2] is a finite succession of bisections that partition a root box x in Rd into sub-boxes using a tree-based data structure. Such trees are also known as plane binary trees [3] or finite rooted binary trees [4]. Here we extend regular pavings to mapped regular pavings, which allow us to map sub-boxes in a regular paving of x to elements in some set Y. Arithmetic operations defined on Y can be extended point-wise in x and carried out in a computationally efficient manner using Y-mapped regular pavings of x. The efficiency is due to recursive algorithms on the finite rooted binary trees, which are closed under union operations. We provide a novel memory-efficient arithmetic over mapped partitions based on regular pavings and develop an inclusion algebra based on intervals in a complete lattice Y over a dense class of such partitions of x based on finite rooted binary trees.

Some applications of mapped regular pavings include (i) computationally efficient representations of radar-observed flight co-trajectories over a busy airport, endowed with an arithmetic for pattern recognition [5], (ii) averaging of histograms in multi-dimensional nonparametric density estimation, (iii) arithmetic over a class of simple functions that are dense in the continuous real-valued functions, (iv) arithmetic over an inclusion algebra of interval-valued functions to enclose locally Lipschitz real-valued functions, (v) obtaining the marginal density by integrating along any subset of its coordinates, (vi) obtaining the conditional function by fixing the values in the domain on a subset of coordinates, and (vii) producing the domain with the highest coverage region.

More generally, mapped regular pavings allow any arithmetic defined over elements in a general set Y to be extended to Y-mapped regular pavings.


The properties of such approximations and arithmetic operations are theorized, implemented and demonstrated with examples.
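A minimal one-dimensional sketch of this point-wise extension (our own illustration, not the authors' implementation): a Y-mapped regular paving over [0, 1] is a finite rooted binary tree whose leaves carry values of Y (here floats), and addition recurses on the union of the two trees.

```python
# Toy 1-D Y-mapped regular paving: a leaf is a value of Y, a pair bisects
# its interval.  Addition extends point-wise and recurses on the union of
# the two trees, splitting a leaf whenever the other tree is deeper.
def add(t1, t2):
    if not isinstance(t1, tuple) and not isinstance(t2, tuple):
        return t1 + t2                       # two leaves: add the values
    l1, r1 = t1 if isinstance(t1, tuple) else (t1, t1)
    l2, r2 = t2 if isinstance(t2, tuple) else (t2, t2)
    return (add(l1, l2), add(r1, r2))        # recurse on the union tree

f = (1.0, (2.0, 3.0))     # 1 on [0,.5), 2 on [.5,.75), 3 on [.75,1]
g = ((5.0, 6.0), 7.0)     # 5 on [0,.25), 6 on [.25,.5), 7 on [.5,1]
assert add(f, g) == ((6.0, 7.0), (9.0, 10.0))
```

The recursion visits only the union of the two partitions, which is the source of the efficiency the abstract describes.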

References:

[1] H. Samet, The design and analysis of spatial data structures, Addison-Wesley, Boston, 1990.

[2] L. Jaulin, M. Kieffer, O. Didrit, and E. Walter, Applied Interval Analysis with Examples in Parameter and State Estimation, Robust Control and Robotics, Springer-Verlag, London, 2001.

[3] R. P. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge University Press, Cambridge, 1999.

[4] J. Meier, Groups, graphs and trees: an introduction to the geometry of infinite groups, Cambridge University Press, Cambridge, 2008.

[5] G. Teng, K. Kuhn, and R. Sainudiin, Statistical regular pavings to analyze massive data of aircraft trajectories, Journal of Aerospace Computing, Information and Communication, (2012), accepted for publication.

Verified computation of symmetric solutions to continuous-time algebraic Riccati matrix equations

Behnam Hashemi

Department of Mathematics, Faculty of Basic Sciences, Shiraz University of Technology, Shiraz, Iran

[emailprotected]

Keywords: interval arithmetic, enclosure methods, Frechet derivative

Our goal is to develop methods based on interval arithmetic which provide guaranteed error bounds for solutions of the continuous-time algebraic Riccati equation (CARE)

R(X) = A^T X + X A − X S X + Q = 0, (1)


where A, S and Q are given matrices in R^{n×n} and X ∈ R^{n×n} is the unknown solution.

The severe disadvantage of the standard Krawczyk operator for the particular equation (1) is that its computation needs a total cost of O(n^6). An interval Newton algorithm has been used in [2] for enclosing a symmetric solution to the CARE (1) with a similar cost of O(n^6). The following theorem is the main theoretical basis for our modified Krawczyk operator, which is more efficient to implement.

Theorem 1 [1]. Assume that f : D ⊂ C^n → C^n is continuous in D. Let x ∈ D and z ∈ IC^n be such that x + z ⊆ D. Moreover, assume that P ⊂ C^{n×n} is a set of matrices containing all slopes P(x, y) for y ∈ x + z =: x. Finally, let R ∈ C^{n×n}. Denote by K_f(x, R, z, P) the set

K_f(x, R, z, P) := { −Rf(x) + (I − RP)z : P ∈ P, z ∈ z }. (2)

Then, if K_f(x, R, z, P) ⊆ int z, the function f has a zero x* in

x + K_f(x, R, z, P) ⊆ x.

Moreover, if P also contains all slope matrices P(y, x) for x, y ∈ x, then this zero is unique in x.

Suppose that the closed-loop matrix A − SX is nondefective. Therefore, it satisfies the following spectral decomposition:

A − SX = V Λ W with V, Λ, W ∈ C^{n×n}, V W = I, Λ = Diag(λ_1, λ_2, ..., λ_n) diagonal.

In general the following identity holds:

r′(x) = (V^{−T} ⊗ W^T) · ( I ⊗ [W(A − SX)W^{−1}]^T + [V^{−1}(A − SX)V]^T ⊗ I ) · (V^T ⊗ W^{−T}),

where ⊗ stands for the Kronecker product of matrices. Hence, an approximate inverse for r′(X) is

R = (V^{−T} ⊗ W^T) · Δ^{−1} · (V^T ⊗ W^{−T}), (3)

where Δ = I ⊗ Λ^T + Λ^T ⊗ I. For any matrix X ∈ C^{n×n} and any vector z ∈ C^{n²} we have

( I_{n²} − R ( I_n ⊗ (A − SX)^T + (A − SX)^T ⊗ I_n ) ) z = (V^{−T} ⊗ W^T) Δ^{−1} Ω (V^T ⊗ W^{−1}) z, (4)

where Ω = Δ − I_n ⊗ ( W(A − SX)W^{−1} )^T − ( V^{−1}(A − SX)V )^T ⊗ I_n.

Theorem 2. Suppose that S and the solution X of the CARE (1) are both symmetric matrices. Then the interval arithmetic evaluation of the Frechet derivative of R(X) contains the slopes P(y, x) for all x, y ∈ x.

Formula (4) together with the above theorem is what we need for enclosing the set { (I − RP)z : P ∈ P, z ∈ z } as a part of our modified Krawczyk operator K_r(x, R, z, P) defined in (2). Here, r := r(x) denotes the vector form of the CARE (1) and R denotes our factorized preconditioner (3). Note that Ω is close to a diagonal matrix, and the multiplication by Δ^{−1} can be done cheaply via Hadamard division. In addition, the first term −Rr(x) in (2), where R is defined by (3), can be enclosed in a similar fashion. An important point is the use of the formula vec(ABC) = (C^T ⊗ A)vec(B), where vec(·) denotes the operator of stacking the columns of a matrix into a long vector. As a result, our algorithm needs only O(n^3) arithmetic operations. It is mainly based on matrix-matrix multiplications and can therefore be implemented very efficiently in Level 3 BLAS. Numerical results will be reported at the conference.
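The vec identity is easy to check numerically. The following sketch (our own illustration using NumPy, not part of the abstract) verifies vec(ABC) = (C^T ⊗ A)vec(B) on random matrices; note that vec stacks columns, i.e. column-major (Fortran) order.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

def vec(M):
    # stack the columns of M into one long vector (column-major order)
    return M.flatten(order="F")

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)   # (C^T kron A) vec(B)
assert np.allclose(lhs, rhs)
```

This is exactly the trick that lets the Kronecker-structured preconditioner be applied through O(n^3) matrix-matrix products instead of forming any n² × n² matrix.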

References:

[1] A. Frommer, B. Hashemi, Verified computation of square roots of a matrix, SIAM Journal on Matrix Analysis and Applications, 31 (2009), No. 3, pp. 1279–1302.

[2] W. Luther, W. Otten, Verified calculation of the solution of algebraic Riccati equation, Developments in Reliable Computing (T. Csendes, Ed.), Kluwer, 1999, pp. 105–119.


Computing interval power functions

Oliver Heimlich, Marco Nehmeier and Jurgen Wolff von Gudenberg

Institute for Computer Science, University of Wurzburg, D-97074 Wurzburg, Germany

[emailprotected]

Keywords: interval arithmetic, elementary functions, power function

We can distinguish between four variants of the general real power function x^y, depending mainly on the domain. For strictly positive values of x, e.g., powers with arbitrary y can be computed without problems, whereas adding the powers of 0 introduces the problem of determining 0^0. In the history of mathematics we can find quite a few papers that support the opinion that 0^0 = 0, but also many others that support 0^0 = 1. The decision for one of these alternatives cannot be taken without regarding the specific context. If there is no such context, we propose to define x^y as undefined at (0, 0). For negative x, it certainly makes sense to allow integer exponents only, thus leading to a discrete domain. Nevertheless the semantics is clearly and uniquely defined. This variant has another advantage: it equals exactly the evaluation of the complex variant applied to real inputs. Finally we discuss the variant which is defined for rational exponents with odd denominators. This variant may have some applications in interval analysis, because the domain is dense in the corresponding contiguous interval.

In this talk we discuss those four variants and try to solve the contentious issues depending on the context. We start with a detailed analysis of the behaviour when x or y approaches ±∞ or ±0, or when x approaches 1. With this information, interval versions of each variant can be computed by efficient algorithms. For the positive case, e.g., we developed some improvements to IntLab that reduce the runtime by 40%. The algorithms for the other variants are based on the positive version. It is, however, strange to define a function that is meant for an interval extension on a discrete grid. The algorithms are accompanied by a rigorous treatment of rounding errors.
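For the strictly positive case, the monotonicity of x^y in both arguments already gives a simple enclosure from the four corner values. The sketch below is our own illustration of that idea, not the authors' algorithm; a one-ulp outward widening via `math.nextafter` stands in for proper directed rounding.

```python
import math

def interval_pow_pos(xlo, xhi, ylo, yhi):
    """Enclosure of {x**y : x in [xlo,xhi], y in [ylo,yhi]} for 0 < xlo.

    For x > 0, x**y is monotone in each argument separately, so the range
    is attained at corner points; one outward ulp accounts for rounding.
    """
    corners = [x ** y for x in (xlo, xhi) for y in (ylo, yhi)]
    lo, hi = min(corners), max(corners)
    # widen by one ulp in each direction (crude outward rounding)
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

lo, hi = interval_pow_pos(2.0, 3.0, 1.0, 2.0)   # range of x^y is [2, 9]
assert lo <= 2.0 <= 9.0 <= hi
```

A production implementation would use correctly rounded pow (or a multiprecision fallback) instead of the one-ulp widening, which is sufficient only when the underlying pow is faithfully rounded.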

Last but not least, we test our implementation with respect to accuracy and speed. The accuracy tests mainly use the multiprecision interval library MPFI; some extension of its functionality is needed. The efficiency tests compare the runtime with several other well-known libraries: C-XSC, filib++ and Boost, as well as IntLab.


Computing reverse interval power functions

Oliver Heimlich, Marco Nehmeier and Jurgen Wolff von Gudenberg

Institute for Computer Science, University of Wurzburg
Am Hubland, D-97074 Wurzburg, Germany
[emailprotected]

Keywords: interval arithmetic, elementary functions, reverse functions, power function

There are a few difficulties with the inversion of interval functions. Plain transformations may create results that are either of low quality, i.e., by far overestimate the correct answer, or are wrong.

In this context, “reverse operations” [2] act as an effective solution for the problems encountered: a single operation shall compute an interval containing solutions to basic equations, which comprise intervals, interval operations and optional interval constraints.

For a (partial) binary arithmetic operation ∘ there are two binary reverse operations on intervals, ∘^{−1} : IR × IR → P(R) and ∘^{−2} : IR × IR → P(R), defined by

∘^{−1}(y, z) = { x ∈ R | there exists y ∈ y with x ∘ y ∈ z } and

∘^{−2}(x, z) = { y ∈ R | there exists x ∈ x with x ∘ y ∈ z }

with x, y, z ∈ IR.

Note that for the power operation x ∘ y = x^y we have, in principle,

∘^{−1}(y, z) = z^{1/y} and ∘^{−2}(x, z) = log_x z.

The details are analyzed in the paper. It turns out that already for the most restricted domain, 8 groups of inverse images are necessary, depending on the overlapping relation [1].


Inverse images of the other variants are more complicated and require distinction between many more cases. With an application or extension of the overlapping relation we show that 26 cases are sufficient. In most cases the reverse interval operations produce results which can simply be computed as the hull of one or two intervals, possibly intersected with the subset of even or odd integral numbers. For the first reverse function, however, there are even some cases where the result is a union of infinitely many and possibly disjoint intervals.

The algorithm works as follows: at first, an enclosure of the union of the many intervals is computed, which, when intersected with x, already produces an enclosure of the result. Each boundary of this enclosure is sharp if, and only if, it is part of the union of many intervals. Thus, the result's boundaries can further be optimized if they are not part of the reverse operation's result. At this point, the algorithm utilizes that the relevant part of the inverse image of z consists of individual lines which are parallel to the x axis. The idea behind the algorithm is illustrated graphically.

References:

[1] M. Nehmeier, J. Wolff von Gudenberg, Interval comparisons and lattice operations based on the interval overlapping relation, in: Proceedings of the World Conference on Soft Computing 2011 (WConSC'11), San Francisco, CA, USA, 2011.

[2] A. Neumaier, Vienna proposal for interval standardization, final version, December 19, 2008, www.mat.univie.ac.at/~neum/ms/1788.pdf.


New directions in interval linear programming

Milan Hladik

Charles University, Faculty of Mathematics and Physics
Malostranske nam. 25, 118 00 Prague, Czech Republic

University of Economics, Faculty of Informatics and Statistics
nam. W. Churchilla 4, 130 67 Prague, Czech Republic
[emailprotected]

Keywords: linear programming, interval equations, interval inequalities

Linear programming is undoubtedly one of the most frequently used techniques in problem solving. Since real-life data are often not known precisely due to measurement errors, estimations and other kinds of uncertainties, we have to reflect this in the theory of linear programming. Modeling such uncertainties by intervals gives rise to the research area called interval linear programming [1,2]. Herein, we suppose that interval domains of the uncertain quantities are given a priori, and the aim is to calculate verified results giving rigorous enclosures (or other types of answers) valid for all possible realizations of the interval data.

There are many problems regarding interval linear programming, such as verifying feasibility, (un)boundedness or optimality for some or for all realizations of the interval quantities; some of them are polynomially solvable, but others are NP-hard. There are, however, two main directions: determining (enclosing) the optimal value range and the optimal solution set. While the former was intensively studied in the past and many results concerning computational complexity and methods are available, there is still a lack of theory and practical methods for the latter. Rigorously and tightly enclosing the optimal solution set is the most challenging problem in this subject. Traditional approaches were based on so-called basis stability, meaning that there is a basis that is optimal for each realization of the intervals. Under basis stability, the problems can be solved very efficiently. Checking this property may be computationally expensive in general, but strong sufficient conditions exist. The problem is, however, that in many situations (e.g. under basis degeneracy) the problem is not basis stable even for tiny intervals. Overcoming non-basis stability remains an important but difficult problem.


In the talk, we survey the known results and present recent developments as well. We focus on the computational complexity, methods and other aspects of enclosing the optimal value range and the optimal solution set. We discuss applications of this technique in diverse areas. Besides many real-world optimization problems (in economics, environmental management, logistics, ...), interval linear programming may also serve as a supporting tool for linear relaxation in constraint programming and global optimization, in matrix games with inexact data, or in statistics for linear regression on uncertain data using the L1 or L∞ norm. Sensitivity analysis, frequently used in economic operations research, can be extended from the traditional one-parameter case to the case with multiple parameters situated in diverse positions. Eventually, we mention some open problems and challenges for future research.
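In the simplest special case, with interval uncertainty only in the right-hand side of min c^T x s.t. Ax ≤ b, the exact optimal value range follows from the two endpoint programs, because enlarging b enlarges the feasible set and hence can only decrease the minimum. The toy sketch below (our own illustration; a naive 2-D vertex-enumeration solver stands in for a real LP solver) demonstrates this.

```python
from itertools import combinations

def lp_min(c, A, b):
    """Minimize c.x over {x : A x <= b} in 2-D by vertex enumeration.
    Assumes a bounded, nonempty feasible polygon (toy solver only)."""
    best = None
    for (a1, b1), (a2, b2) in combinations(list(zip(A, b)), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                        # parallel constraints
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(ai[0]*x[0] + ai[1]*x[1] <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x[0] + c[1] * x[1]
            best = val if best is None else min(best, val)
    return best

# min -x1 - x2  s.t.  x1 <= b1, x2 <= b2, -x1 <= 0, -x2 <= 0
c = (-1.0, -1.0)
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]

# interval right-hand side b in [b_lo, b_hi]: the optimal value is
# nonincreasing in b, so its exact range is [f(b_hi), f(b_lo)]
b_lo, b_hi = [1.0, 1.0, 0.0, 0.0], [2.0, 3.0, 0.0, 0.0]
f_range = (lp_min(c, A, b_hi), lp_min(c, A, b_lo))
assert f_range == (-5.0, -2.0)
```

Once the constraint matrix itself carries intervals, this monotonicity argument breaks down, which is where the complexity results surveyed in the talk take over.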

References:

[1] M. Fiedler, J. Nedoma, J. Ramik, J. Rohn, and K. Zimmermann, Linear Optimization Problems with Inexact Data, Springer, New York, 2006.

[2] M. Hladik, Interval linear programming: a survey, in: Z.A. Mann, editor, Linear Programming — New Frontiers in Theory and Applications, chapter 2, pages 85–120, Nova Science Publishers, New York, 2012.


Computing enclosures of overdetermined interval linear systems

Jaroslav Horacek1,2 and Milan Hladik1,2

1 Charles University, Faculty of Mathematics and Physics, Malostranske nam. 25, 118 00 Prague, Czech Republic

2 University of Economics, Faculty of Informatics and Statistics, nam. W. Churchilla 4, 130 67 Prague, Czech Republic

[emailprotected], [emailprotected]

Keywords: interval linear systems, enclosure methods, overdetermined systems

Real-life problems can be described by different means: difference and differential equations, linear and nonlinear systems, etc. The various descriptions can often be transformed into each other using only linear equalities (or inequalities). That is why interval linear systems remain in the focus of researchers. By an interval linear system we mean a system Ax = b, where A is an interval m×n matrix and b is an m-dimensional interval vector. We will consider a special class of these systems called overdetermined systems: the systems for which m > n holds. Simply said, they have more equations than variables.

When we talk about interval linear systems, it is necessary to mention what we mean by the solution of such systems. The solution set Σ of an interval linear system Ax = b is the set of all solutions of all instances of this interval system. We get an instance of an interval system when we independently pick values from all interval coefficients of the system, thus obtaining a point real system. In other words,

Σ = { x | Ax = b for some A ∈ A, b ∈ b }.

In what follows, we consider Σ, not the least squares or any other approximation of the solution set. If no instance of the interval system has a solution, we call the system unsolvable. We are interested in the tightest possible n-dimensional box (aligned with the axes) that encloses the solution set of an interval system; it is called the interval hull of the solution set. Finding it is an NP-hard problem, so it is often sufficient to find an as narrow as possible n-dimensional box containing the hull, called an interval enclosure of the solution set.
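A tiny illustration of this definition (ours, not from the abstract): for the overdetermined 2×1 system x = b1, x = b2 with interval right-hand sides, an instance is solvable only if the two picked values coincide, so Σ is the intersection of the two intervals, and unsolvability appears as an empty intersection.

```python
# Sigma for the 2x1 overdetermined system  x = b1, x = b2  with
# b1 in i1, b2 in i2: an instance needs the picked b1 to equal b2,
# so Sigma = i1 ∩ i2; an empty intersection means "unsolvable".
def sigma_1d(i1, i2):
    lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
    return (lo, hi) if lo <= hi else None   # None: no instance is solvable

assert sigma_1d((1.0, 2.0), (1.5, 2.5)) == (1.5, 2.0)
assert sigma_1d((1.0, 2.0), (3.0, 4.0)) is None   # unsolvability detected
```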


Square systems (those for which m = n holds) can possess some advantageous properties: their matrix A can be diagonally dominant, positive definite, an M-matrix and many more, and we know that our algorithms behave well in these cases. Unfortunately, overdetermined systems do not possess any of these properties. That is why it is sometimes more difficult to solve these systems. However, we can use some earlier designed numerical methods and adapt them to be suitable for computing with intervals.

Here we would like to present a summary of the methods applicable to overdetermined interval linear systems: Gaussian elimination, classical iterative methods, the Rohn method, supersquare and subsquare methods, and linear programming.

After introducing each method, we would like to talk about a comparison of all the mentioned methods based on extensive numerical testing on random matrices. We also would like to point out discovered properties of the methods. Some methods fail if the radii of the interval coefficients of a system exceed certain limits. Some of them, despite their simplicity (the supersquare and subsquare methods), return surprisingly narrow results. Another important property of some methods (Gaussian elimination, subsquare methods) is that they can quickly determine whether the system is unsolvable. This can be useful in applications (system validation, technical computing) where we do care whether the systems are solvable or unsolvable.

References:

[1] E.R. Hansen, G.W. Walster, Solving overdetermined systems of interval linear equations, Reliable Computing, 12 (2006), No. 3, pp. 239–243.

[2] J. Horacek, Overdetermined systems of interval linear equations, master thesis (in Czech), Charles University in Prague, Department of Applied Mathematics, Prague, 2011.

[3] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[4] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.

[5] J. Rohn, Enclosing solutions of overdetermined systems of linear interval equations, Reliable Computing, 2 (1996), No. 2, pp. 167–171.


Sardana: an automatic tool for numerical accuracy optimization

Arnault Ioualalen1,2,3, Matthieu Martel1,2,3

1 Univ. Perpignan Via Domitia, Digits, Architectures et Logiciels Informatiques, F-66860, Perpignan, France
2 Univ. Montpellier II, Laboratoire d'Informatique, Robotique et de Microelectronique de Montpellier, UMR 5506, F-34095, Montpellier, France
3 CNRS, Laboratoire d'Informatique, Robotique et de Microelectronique de Montpellier, UMR 5506, F-34095, Montpellier, France
[emailprotected], [emailprotected]

Keywords: numerical accuracy, abstract interpretation, code synthesis.

On computers, real numbers are approximated by floating-point numbers defined by the IEEE 754 formats [1]. For most computations these formats are precise enough, even though they inherently introduce approximation errors. However, in some cases the accuracy of the calculation is critical and the user needs to certify that his program will always yield an accurate enough output for every valid input. As it is impossible to check the validity of a calculation for all inputs, static analyzers such as Fluctuat [4] or Astree [2] rely on an interval or relational representation of the inputs, combined with abstract interpretation.

Sardana is a tool designed to automatically rewrite numerical computations performed in floating-point arithmetic in order to optimize their accuracy. Sardana works directly on the source code of a LUSTRE program, such as the ones used in real avionic software, and produces a new source code as well as an absolute error bound which is less than the original one. To achieve this, Sardana uses: (i) interval analysis, to handle large sets of inputs and not only isolated traces; (ii) a novel intermediate representation of programs called APEG [5], which allows us to manipulate many transformed versions of the initial program in a compact way; (iii) a local search heuristic [5] to synthesize a new version of the program from an APEG; and (iv) abstract interpretation [3] to enforce the validity of our analysis of the accuracy.

The first challenge is how to transform a program into a more accurate one. As there is an exponential number of ways to write an arithmetic expression (e.g. a simple sum of n terms), we cannot exhaustively generate all possible transformations. This problem is closely related to the phase-ordering problem of compilers. We use abstract interpretation to narrow down this search space while allowing us to represent, in an abstract way, as many transformed versions of the initial program as possible. Our structure of Abstract Program Expression Graph (APEG) is built from the syntactic tree of the source code, and is a compact and efficient way to handle multiple versions of a program without duplication and exponential growth of the structure. As many transformations involve only the same part of the program, APEGs merge them locally into one equivalence class without duplicating the rest of the structure. Also, we introduce the concept of abstraction boxes into APEGs, which are defined by an operator and a set of sub-expressions. Each abstraction box represents the exponential number of expressions that can be synthesized with the given operator over the set of sub-expressions of the box.

Next, Sardana has to extract from an APEG a program which has better numerical accuracy. We use a limited-depth search strategy with memorization: intuitively, we select the best way to evaluate an expression by considering only the best way to evaluate its sub-expressions. To accurately calculate both rounding errors and floating-point values, Sardana uses the GMP and MPFR libraries. Sardana is also able to manipulate any IEEE 754 floating-point format and fixed-point arithmetic.

Several experimental results have been obtained on various benchmarks and real-case applications, such as summation (results are 50% closer to the real values), polynomial functions like Taylor expansions (20% to 30% more accurate), and real avionic codes (10% more accurate half the time). Finally, Sardana provides a graphical interface allowing the user to specify the analyzer parameters easily and analyze the results in a user-friendly way.
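A tiny hand-made example of the kind of rewriting such a tool automates (ours, not taken from Sardana): two mathematically equivalent orderings of the same three-term sum with very different rounding behaviour, caused by absorption of a small term into a large one.

```python
# Two mathematically equivalent evaluations of 1e16 + 1.0 - 1e16 in
# IEEE 754 double precision: the evaluation order decides whether the
# small term survives.
xs = [1e16, 1.0, -1e16]
naive = (xs[0] + xs[1]) + xs[2]      # 1.0 is absorbed into 1e16 first
rewritten = (xs[0] + xs[2]) + xs[1]  # cancellation happens first
assert naive == 0.0 and rewritten == 1.0   # exact result is 1.0
```

Searching among all such reorderings is exactly the exponential space that the APEG structure represents compactly.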

References:

[1] ANSI/IEEE, IEEE Standard for Binary Floating-point Arithmetic, Std 754-2008 edition, 2008.

[2] J. Bertrane, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Mine, and X. Rival, Static analysis and verification of aerospace software by abstract interpretation, AIAA Infotech@Aerospace, 2010.

[3] P. Cousot and R. Cousot, Abstract interpretation: a unified lattice model for static analysis of programs by construction of approximations of fixed points, Principles of Programming Languages, pp. 238–252, 1977.

[4] D. Delmas, E. Goubault, S. Putot, J. Souyris, K. Tekkal, and F. Vedrine, Towards an industrial use of FLUCTUAT on safety-critical avionics software, Formal Methods for Industrial Critical Systems (FMICS), 2009.

[5] A. Ioualalen and M. Martel, A new abstract domain for the representation of mathematically equivalent expressions, Static Analysis Symposium (SAS), 2012.


Interval analysis and robotics

Luc Jaulin

Labsticc, IHSEV, ENSTA-Bretagne, 2 rue Francois Verny, 29200 Brest, France

[emailprotected]

Keywords: interval arithmetic, contractors, robotics

When dealing with complex mobile robots, we often have to solve a huge set of nonlinear equations. They may be related to measurements collected by sensors, to prior knowledge of the environment, or to the differential equations describing the evolution of the robot. For a large class of robots these equations are uncertain, involve many unknown variables, are strongly nonlinear and should be solved very quickly. Fortunately, the number of these equations is generally much larger than the number of variables. We can assume that the system to be solved has the following form:

f_i(x, y_i) = 0, x ∈ R^n, y_i ∈ [y_i] ⊂ R^{p_i}, i ∈ {1, ..., m}. (1)

The vector x ∈ R^n is the vector of unknown variables, the vector y_i ∈ R^{p_i} is the ith data vector (which is approximately known) and f_i : R^n × R^{p_i} → R is the ith function. The box [y_i] is a small box of R^{p_i} that takes into account some uncertainties on y_i. Here, we assume that the number of equations m is much larger than the number of unknown variables n (otherwise, the method will not be able to provide accurate results). Typically, we could have n = 1000 and m = 10000. In order to provide a fast polynomial algorithm able to find a box [x] that encloses all feasible x, we associate with each equation f_i(x, y_i) = 0 a contractor C_i : IR^n → IR^n that narrows the box [x] without removing any value for x consistent with the ith equation. Such a contractor can be obtained using interval computations [1]. Then we iterate the contractors until no more contraction can be performed. The procedure is illustrated in Figure 1, where the sequence of contractors C1, C2, C3, C1, C2, C3, ... is applied. Note that the first contractor C1 was able to contract the initial box [x] = [−∞,∞]^2 to the box containing the thick circle.

Figure 1: Illustration of the propagation process

As an example, we consider the SLAM (simultaneous localization and map building) problem, which asks whether it is possible for an autonomous robot to move in an unknown environment and build a map of this environment while simultaneously using this map to compute its location. It is shown in [2] that the general SLAM problem can be cast into the form (1). The corresponding system is strongly nonlinear, and classical non-interval methods cannot deal with this type of problem in a reliable way. The efficiency of the approach will be illustrated on a two-hour experiment involving an actual underwater robot. This four-meter long robot, built by GESMA (Groupe d'etude sous-marine de l'Atlantique), is equipped with many sensors (such as sonars, Loch-Doppler, gyrometers, ...) which provide the data. The algorithm is able to provide an accurate envelope for the trajectory of the robot and to compute sets which contain some detected mines in less than one minute. Other examples involving underwater robots and sailboat robots will also be presented.
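As a minimal illustration of a contractor (ours, not taken from the talk), the classical forward-backward contractor for the single constraint x + y = z narrows all three interval boxes without removing any consistent value.

```python
# Forward-backward contractor for the constraint  x + y == z  on
# intervals, a toy version of the propagation step described above.
def inter(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def contract_sum(x, y, z):
    """Narrow x, y, z w.r.t. x + y = z without losing feasible points."""
    z = inter(z, add(x, y))   # forward:  z must lie in x + y
    x = inter(x, sub(z, y))   # backward: x must lie in z - y
    y = inter(y, sub(z, x))   # backward: y must lie in z - x
    return x, y, z

x, y, z = contract_sum((0.0, 10.0), (2.0, 4.0), (5.0, 6.0))
assert (x, y, z) == ((1.0, 4.0), (2.0, 4.0), (5.0, 6.0))
```

Chaining one such contractor per equation and iterating to a fixed point is exactly the propagation loop sketched in Figure 1.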

References:

[1] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[2] L. Jaulin, A nonlinear set-membership approach for the localization and map building of an underwater robot using interval constraint propagation, IEEE Transactions on Robotics, 25 (2009), No. 1, pp. 88–98.


Using an interval branch-and-prune algorithm for lightning protection systems design

Maksim Karpov

Department of Computer Software, Ivanovo State Power University, 34 Rabfakovskaya St., 153003 Ivanovo, Russia

153003 Ivanovo, [emailprotected]

Keywords: interval analysis, enclosure methods, lightning protection

The external lightning protection system (LPS) is intended to intercept direct lightning flashes to a structure, including flashes to the side of the structure. The probability of structure penetration by a lightning current is considerably decreased by the presence of a properly designed air-termination system, which is the part that interacts with the lightning; its configuration determines the form of the protection zones and the protection of objects and structures. In design practice, horizontal sections of the protection zone, made at a certain height (usually that of the highest building), are commonly used for checking the safety of objects. The facility is considered to be protected if it is totally covered by these sections. Otherwise, it is necessary to determine the total unprotected area. Knowing the contours at several levels makes it possible to check whether a structure of complicated form is completely inside the protected volume.

Such a section is constructed as a group of sections of protection zones, which are formed by individual rods as well as by their interactions (pair, triple, multiple). The shape of the section is described by linear and nonlinear constraints; it depends on the applied model of lightning attraction to ground objects. Geometrically, the result is a collection of planar closed objects of complicated form. The boundary of the region (outer boundaries and holes) consists of end-connected curves where each point shares only two edges.

Usually such sections are built on a geometric modeling kernel, which incorporates low-level data structures and algorithms to support mixed-dimensional geometric modeling. For converting objects that enclose an area into a region, we use Boolean operations and algorithms selecting closed contours. These operations take a long time to complete. Furthermore, we often have to deal with errors in constructions on the kernel side.


We propose a method for computing inner and outer approximations of the unprotected region by interval pavings. In this paper we consider a covering method that provides a tight piecewise linear interval enclosure of the region. The method is based on the branch-and-prune algorithm suggested in [2], and it generates a covering of the solution set by a collection of smaller and smaller boxes which give increasingly accurate information about the location of the boundary of the region. The proposed method can be used to speed up geometric computations for lightning protection systems design.
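A toy version of such a covering (ours, with the unit disk standing in for a protection-zone section): boxes proved inside the region are kept, boxes proved outside are pruned, and undecided boxes are bisected, which yields guaranteed inner and outer area bounds.

```python
# Toy branch-and-prune paving of the disk x^2 + y^2 <= 1: each box is
# classified by an interval evaluation of x^2 + y^2, and boundary boxes
# are bisected along their wider side until a depth limit is reached.
def sq(i):
    lo, hi = i
    pts = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(pts), max(pts))

def pave(box, depth):
    (xl, xh), (yl, yh) = box
    lo = sq((xl, xh))[0] + sq((yl, yh))[0]
    hi = sq((xl, xh))[1] + sq((yl, yh))[1]
    if hi <= 1.0:
        inner.append(box)            # certainly inside the region
    elif lo > 1.0:
        pass                         # certainly outside: pruned
    elif depth == 0:
        boundary.append(box)         # undecided: part of the outer cover
    elif xh - xl >= yh - yl:         # bisect along the wider side
        m = 0.5 * (xl + xh)
        pave(((xl, m), (yl, yh)), depth - 1)
        pave(((m, xh), (yl, yh)), depth - 1)
    else:
        m = 0.5 * (yl + yh)
        pave(((xl, xh), (yl, m)), depth - 1)
        pave(((xl, xh), (m, yh)), depth - 1)

inner, boundary = [], []
pave(((-2.0, 2.0), (-2.0, 2.0)), 8)
area = lambda b: (b[0][1] - b[0][0]) * (b[1][1] - b[1][0])
area_in = sum(map(area, inner))
area_out = area_in + sum(map(area, boundary))
assert area_in <= 3.141592653589793 <= area_out   # disk area is pi
```

The soundness of the interval test guarantees the bracketing of the true area at every depth; increasing the depth tightens both bounds.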

Further details will be considered too, namely the possibility (and usability) of preconditioning for improving the result and performance of the subdivision algorithm, as well as the possibility of parallelizing the method.

We are going to present and discuss numerical results produced by our technique.

References:

[1] IEC 62305-3 Protection against lightning, Part 3: Physical damages andlife hazard in structures, International Electrotechnical Commission, 2006.

[2] A. Neumaier, The enclosure of solutions of parameter-dependent systems of equations, in: Reliability in Computing (R.E. Moore, ed.), Academic Press, San Diego, 1988, pp. 269–286.


An algorithm to reduce the number of dummy variables in affine arithmetic

Masahide Kashiwagi

Faculty of Science and Engineering, 3-4-1 Ookubo, Shinjuku-ku, Tokyo 169-8555, Japan

Tokyo 169-8555, [emailprotected]

Keywords: affine arithmetic

Affine arithmetic (AA) is an extension of interval arithmetic. In AA, quantities are represented by affine forms

a_0 + a_1 ε_1 + a_2 ε_2 + · · · + a_n ε_n,

where the ε_i are dummy variables which satisfy −1 ≤ ε_i ≤ 1 and express the relations between quantities written in affine form. In AA, the number of εs gradually increases, which makes computations slower.

In this paper, we propose an algorithm to reduce the number of εs. Note that we should apply the algorithm to as many affine variables in memory as possible simultaneously; application to a small number of affine variables is not effective.

Consider p affine variables which have q dummy εs:

a_10 + a_11 ε_1 + · · · + a_1q ε_q
a_20 + a_21 ε_1 + · · · + a_2q ε_q
...
a_p0 + a_p1 ε_1 + · · · + a_pq ε_q

We can reduce the number of εs by "intervalizing" several of them. Let S be an index set of the εs we want to erase; we erase them by substituting as follows:

Σ_{i∈S} a_1i ε_i → ( Σ_{i∈S} |a_1i| ) ε_{q+1}
...
Σ_{i∈S} a_pi ε_i → ( Σ_{i∈S} |a_pi| ) ε_{q+p}


Here, p new εs are added in order to represent the generated intervals. In the following, we consider reducing the number of εs to r. We select the q − (r − p) εs which have a small "intervalization penalty" and intervalize them; then the number of εs is reduced to (r − p) + p = r:

Σ_{i∉S} a_1i ε_i + ( Σ_{i∈S} |a_1i| ) ε_{q+1}
...
Σ_{i∉S} a_pi ε_i + ( Σ_{i∈S} |a_pi| ) ε_{q+p}

Now we show how to select the εs whose "intervalization penalty" is small. Let the vectors v_1, v_2, ..., v_q ∈ R^p be v_i = (a_1i, ..., a_pi)^T.

Definition 1 (Penalty function). For a vector v = (a_1, ..., a_p)^T we define the penalty function P as follows:

• When a_1 = a_2 = · · · = a_p = 0, we define P(v) = 0.

• Otherwise, let a_s, a_t be the first and second values in the order of the absolute values |a_i|, that is, |a_s| ≥ |a_t| ≥ |a_i| (i ≠ s, t). Then we define

P(v) = |a_s| · |a_t| / ( |a_s| + |a_t| ).

We should choose the q − (r − p) εs in ascending order of the value P(v_i). Note that the penalty function has the following property.

Theorem 1 (Property of the penalty function). Let v = (a_1, ..., a_p)^T ∈ R^p, and let the norm on R^p be the maximum norm. Let L ⊂ R^p be the line segment defined by

(a_1, ..., a_p)^T ε (−1 ≤ ε ≤ 1),

and let B ⊂ R^p be the hyper-rectangle defined by

(a_1 ε_1, ..., a_p ε_p)^T (−1 ≤ ε_i ≤ 1).

Then the Hausdorff distance between L and B is H(L, B) = 2P(v).

That is, 2P(v) is the maximum distance between L (the line segment defined by the original v) and B (the hyper-rectangle generated by intervalization of L). We can say that the smaller P(v) is, the smaller the increase of range caused by intervalization.
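As a rough illustration of this selection rule, here is a plain-float Python sketch (ours, not the authors' code: rigorous affine arithmetic would need outward rounding, and the names `penalty` and `reduce_epsilons` are hypothetical):

```python
def penalty(v):
    """Penalty P(v) for intervalizing one dummy variable.

    v holds the coefficients (a_1i, ..., a_pi) of epsilon_i across the
    p affine variables; P(v) = |a_s|*|a_t| / (|a_s| + |a_t|) for the two
    largest magnitudes, and 0 if all coefficients vanish.
    """
    mags = sorted((abs(a) for a in v), reverse=True)
    if mags[0] == 0.0:
        return 0.0
    s = mags[0]
    t = mags[1] if len(mags) > 1 else 0.0
    return s * t / (s + t)

def reduce_epsilons(coeffs, r):
    """Reduce q dummy variables to r = (r - p) kept plus p fresh ones.

    coeffs[k][i] is the coefficient of eps_i in the k-th affine variable.
    Returns (kept coefficient columns, radii), where the k-th radius is
    the coefficient of the fresh eps_{q+k}.
    """
    p, q = len(coeffs), len(coeffs[0])
    keep = r - p                       # how many original epsilons survive
    order = sorted(range(q),
                   key=lambda i: penalty([coeffs[k][i] for k in range(p)]))
    erased = set(order[:q - keep])     # erase the q-(r-p) cheapest columns
    kept = [[coeffs[k][i] for i in range(q) if i not in erased]
            for k in range(p)]
    radii = [sum(abs(coeffs[k][i]) for i in erased) for k in range(p)]
    return kept, radii
```

For example, with p = 2 affine variables over q = 3 dummies, `reduce_epsilons([[1.0, 0.1, 2.0], [0.5, 0.1, 1.0]], 3)` keeps only the highest-penalty column and folds the other two into fresh interval terms.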

References:

[1] M.V.A. Andrade, J.L.D. Comba and J. Stolfi, Affine arithmetic, INTERVAL’94, St. Petersburg (Russia), March 7–10, 1994.


Uniform second-order polynomial-time

computable operators and data

structures for real analytic functions

Akitoshi Kawamura1, Norbert Muller2, Carsten Rosnick3, Martin Ziegler3

1Tokyo University and 2Universitat Trier and 3TU Darmstadt

Keywords: computable analysis, complexity theory, analytic functions

Recursive Analysis is the theory of real computation by approximation up to guaranteed, prescribable absolute error. Initiated by Alan Turing, it formalizes verified numerics in unbounded precision [1,7] within the common framework of the Theory of Computation [2]. More precisely, a function f : [0, 1] → R is called computable iff a Turing machine can, upon input of every sequence am ∈ Z with |x − am/2^(m+1)| ≤ 2^(−m) for x ∈ [0, 1], output a sequence bn ∈ Z with |f(x) − bn/2^(n+1)| ≤ 2^(−n). Any such f is necessarily continuous. More generally, the Type-2 Theory of Effectivity [9] studies, compares, and combines so-called representations, that is, encodings for separable metric spaces like C[0, 1]. Refining mere computability, real complexity theory investigates the running time in terms of the output precision n; see, e.g., [5] and the references therein. Asymptotic growth polynomial in n is generally considered practical. Concerning operators and functionals on C[0, 1], recall the following strong, nonuniform lower bounds relative to the millennium problem and its strengthenings:

• Max(f) := ([0, 1] ∋ x ↦ max{f(y) : 0 ≤ y ≤ x}) ∈ C[0, 1] is polynomial-time computable for every polynomial-time computable f ∈ C[0, 1] iff P = NP; cmp. [5, Theorem 3.7].

• ∫f := ([0, 1] ∋ x ↦ ∫₀^x f(y) dy) ∈ C[0, 1] is polynomial-time computable for every polynomial-time computable f ∈ C[0, 1] iff FP = #P; cmp. [5, Theorem 5.33].

• The (unique local) solution u(·) =: Solve(f) to the ordinary differential equation u′(t) = f(t, u(t)), u(0) = 0, is polynomial-time computable for every polynomial-time computable Lipschitz-continuous f iff P = PSPACE; cmp. [3, Theorem 3.2].

Restricted to functions f : [0, 1] → R analytic on some complex open neighbourhood of [0, 1], on the other hand, the above operators Max, ∫, Solve have been shown to map polynomial-time computable arguments to polynomial-time computable values; cmp. [6] and the references therein. However, these results are nonuniform, too, in referring to the dependence on n only while regarding the argument f as arbitrary but fixed. We strengthen the latter by presenting and analyzing algorithms receiving both n and f as inputs. More precisely, consider the following three data structures representing a real analytic f : [0, 1] → R:

α: As a finite list (M, (xm), (am,j), (Lm), (Am)) of dyadic centers xm ∈ D ∩ [0, 1] (1 ≤ m ≤ M), binary integer bounds Am, and inverse radii Lm ∈ N in unary such that the intervals [xm − 1/(4Lm), xm + 1/(4Lm)] cover [0, 1], together with (programs computing the) power series coefficients am,j = f^(j)(xm)/j! of f around xm satisfying |am,j| ≤ Am · Lm^j.

β: A program computing f, together with an integer L in unary such that f is complex analytic even on (an open neighbourhood of) the closed rectangle RL := {x + iy | −1/L ≤ y ≤ 1/L, −1/L ≤ x ≤ 1 + 1/L}, and a binary integer upper bound B to |f| on said RL.

γ: A program evaluating f|D, together with integers A (in binary) and K (in unary) such that |f^(j)(x)| ≤ A · K^j · j! holds for all 0 ≤ x ≤ 1.

We prove them mutually second-order [4] polynomial-time equivalent; and we devise second-order polynomial-time algorithms on them for i) evaluation, ii) addition, iii) multiplication, iv) differentiation, v) integration, and vi) maximization. These may help to improve the mere computability of Bloch’s Constant [8] to an algorithm actually calculating some new digits of it.
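For instance, under the derivative bound of data structure γ, a truncated Taylor expansion carries an explicit geometric tail bound: for K·|h| < 1, the dropped tail is at most A·(K|h|)^(J+1)/(1 − K|h|). A minimal Python sketch (our illustration, not the authors' algorithm; exact rational arithmetic stands in for rigorous dyadic computations):

```python
from fractions import Fraction

def eval_with_tail_bound(derivs, A, K, h, J):
    """Evaluate sum_{j<=J} f^(j)(x)/j! * h**j and bound the dropped tail.

    Assumes the gamma-style bound |f^(j)(x)| <= A * K**j * j! and K*|h| < 1,
    so the tail is dominated by the geometric series A * sum_{j>J} (K*|h|)**j.
    Returns (truncated value, rigorous tail bound), both as exact fractions.
    """
    h = Fraction(h)
    ratio = K * abs(h)
    assert ratio < 1, "step too large for the geometric tail bound"
    value, factorial = Fraction(0), Fraction(1)
    for j in range(J + 1):
        value += Fraction(derivs[j]) / factorial * h ** j
        factorial *= j + 1
    tail = A * ratio ** (J + 1) / (1 - ratio)
    return value, tail

# example: f = exp at x = 0 (f^(j)(0) = 1), where A = 3, K = 1 are valid bounds
value, err = eval_with_tail_bound([1] * 9, 3, 1, Fraction(1, 2), 8)
```

Here `value` encloses exp(1/2) to within `err` = 3·(1/2)⁹/(1 − 1/2).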

References:

[1] F. Bornemann, D. Laurie, S. Wagon, J. Waldvogel, The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing, 2004.

[2] M. Braverman, S.A. Cook, Computing over the reals: foundations for scientific computing, Notices of the AMS, 53 (2006), No. 3, pp. 318–329.

[3] A. Kawamura, Lipschitz continuous ordinary differential equations are polynomial-space complete, Computational Complexity, 19 (2010), No. 2, pp. 305–332.

[4] A. Kawamura, S.A. Cook, Complexity theory for operators in analysis, Proc. 42nd Ann. ACM Symp. on Theory of Computing, pp. 495–502; full version to appear in ACM Transactions on Computation Theory.

[5] K.-I. Ko, Computational Complexity of Real Functions, Birkhauser, 1991.


[6] N.T. Muller, Constructive aspects of analytic functions, Informatik-Berichte FernUniversitat Hagen, 190 (1995), pp. 105–114.

[7] N.T. Muller, The iRRAM: exact arithmetic in C++, Springer LNCS, 2064 (2001), pp. 222–252.

[8] R. Rettinger, Bloch’s constant is computable, Journal of Universal Computer Science, 14 (2008), No. 6, pp. 896–907.

[9] K. Weihrauch, Computable Analysis, Springer, 2000.

On rigorous upper bounds

to a global optimum

Ralph Baker Kearfott

Department of MathematicsUniversity of Louisiana at Lafayette

U.L. Box 4-1010Lafayette, LA 70504-1010 USA

[emailprotected]

Keywords: global optimization, verified bounds, local optimizers

In mathematically rigorous complete search in global optimization, a sharp upper bound on the global optimum is important for the overall efficiency of the branch-and-bound process. Local optimizers, using floating point arithmetic, often compute a point close to an actual global optimizer. However, particularly with many equality constraints or active inequality constraints, naive methods for using this approximate local optimizer to obtain a mathematically rigorous upper bound on the global optimum fail. Nonetheless, there are various techniques for this purpose. Several of these are:

Verify feasibility of a reduced system: We identify equality constraints and active inequality constraints. Provided the total number m of such constraints is less than the number of variables n, we identify a subspace of dimension m in which the m values of the m constraints are sensitive, then apply an m-dimensional interval Newton method within this subspace to prove existence of a feasible point within a small box. This is the technique espoused in [2] or [1, §5.2.4].


Use the Kuhn–Tucker or Fritz John conditions: Perform an interval Newton method in the (m1 + m2 + n + 1)-dimensional space defined by the Fritz John conditions (variables and multipliers), where m1 and m2 are the numbers of equality and inequality constraints. This can prove existence of a critical point within a small box surrounding an approximate optimum.

Relax the equality constraints to inequality constraints: This is the approach followed, say, in [3]. Although a slightly different problem is being solved, a point strictly interior to the feasible region can be found, and a simple interval evaluation at that point can be used to verify feasibility.

The preceding verification techniques all must start with a point that is approximately feasible or approximately optimal; the technique is then either applied directly to that point, or a small box is constructed around the point, within which feasibility can be verified. Some ways of obtaining such a point are:

Use a local (floating point) optimizer (such as IPOPT [4]);

Use a generalized Newton method to project onto the feasible set (that is, apply Newton’s method with the pseudo-inverse of the Jacobian matrix of the constraints);

Use specialized techniques to project onto or into the feasible set, starting with an approximately feasible point or an approximately optimizing point.
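The generalized Newton projection above can be sketched as follows (our hedged illustration: a single smooth constraint in plain floating point, with an illustrative circle constraint; for one constraint, the pseudo-inverse step reduces to x ← x − g(x)·∇g/‖∇g‖²):

```python
def project_onto_feasible(g, grad_g, x, tol=1e-12, max_iter=50):
    """Newton projection onto {x : g(x) = 0} for one smooth constraint.

    For a single constraint, the Moore-Penrose pseudo-inverse of the
    1 x n Jacobian J is J^T / ||J||^2, so each step is
    x <- x - g(x) * grad / ||grad||^2.
    """
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) <= tol:
            break
        grad = grad_g(x)
        nrm2 = sum(d * d for d in grad)
        x = [xi - gx * di / nrm2 for xi, di in zip(x, grad)]
    return x

# illustrative constraint: the unit circle g(x) = x1^2 + x2^2 - 1
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_g = lambda x: [2.0 * x[0], 2.0 * x[1]]
x_feas = project_onto_feasible(g, grad_g, [0.7, 0.9])
```

A verified code would then build a small box around `x_feas` and apply an interval Newton method there, as described above.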

We present our experience and summarize the advantages and pitfalls of each of these techniques.

References:

[1] R. B. Kearfott, Rigorous Global Search: Continuous Problems (Nonconvex Optimization and Its Applications, Vol. 13), Kluwer Academic Publishers, Norwell, MA, USA, and Dordrecht, The Netherlands, 1996.

[2] R. B. Kearfott, On proving existence of feasible points in equality constrained optimization problems, Math. Program., 83 (1998), No. 1, pp. 89–100.

[3] J. Ninin, Optimisation Globale Basee sur l’Analyse d’Intervalles: Relaxations Affines et Techniques d’Acceleration, Ph.D. dissertation, Universite de Toulouse, Toulouse, France, December 2010.

[4] A. Wachter, https://projects.coin-or.org/Ipopt (homepage of IPOPT).


Bounding optimal value function

in linear programming

under interval uncertainty

Oleg V. Khamisov

Institute of Energy Systems SD RAS
130, Lermontov str.
644033 Irkutsk, Russia
[emailprotected]

Keywords: parametric linear programming, optimal value function, convexand concave support functions

We consider the optimal value function of the parametric linear programming problem

f(c, A, b) = min { cᵀx : Ax ≤ b, x̲ ≤ x ≤ x̄ },

where c, x̲, x̄ ∈ Rn, A is an m × n matrix, and b ∈ Rm. We assume that the coefficients of c, A and b vary within the prescribed intervals

c̲j ≤ cj ≤ c̄j , j = 1, . . . , n,

a̲ij ≤ aij ≤ āij , i = 1, . . . , m, j = 1, . . . , n,

b̲i ≤ bi ≤ b̄i, i = 1, . . . , m,

while x̲ and x̄ are fixed. The optimal value function f(c, A, b) is in general nonsmooth and nonconvex. The problem is to find bounds f̲ and f̄ such that

f̲ ≤ f(c, A, b) ≤ f̄.

To do this, we consider auxiliary problems of minimizing and maximizing f(c, A, b). A branch-and-bound type global optimization approach is suggested. It is based on the concepts of convex and concave support functions [1]. Illustrative numerical examples are presented.

References:

[1] O.V. Khamisov, On application of convex and concave support functions in nonconvex problems, Manuscript of Institute of Operations Research, University of Zurich, (1998), 16 p.


An environment for verified modeling

and simulation of solid oxide fuel cells

Stefan Kiel1, Ekaterina Auer1, and Andreas Rauh2

1Faculty of Engineering, INKO
University of Duisburg-Essen
D-47048 Duisburg, Germany
kiel, [emailprotected]

2Chair of Mechatronics
University of Rostock
D-18059 Rostock, Germany
[emailprotected]

Keywords: global optimization, parallelization, GPU, SOFC, UniVerMeC

A major goal of a current joint project between the Universities of Rostock and Duisburg-Essen is to design and develop robust and accurate control strategies for solid oxide fuel cells (SOFCs). For this purpose, system models based on ordinary differential equations (ODEs) are being developed [2]. Unlike most state-of-the-art models, they can be used to devise control laws for SOFCs which are valid not only for fixed but also for nonstationary operating points. To allow users to employ the new models and techniques easily in combination with different verified tools, we implement the environment VeriCell. It features an intuitive graphic interface for construction of SOFC models from predefined building blocks and is based on the framework UniVerMeC [1], which provides unified access to various verified arithmetics and algorithms. New SOFC component models can be added to VeriCell as they are being developed, for which purpose a plug-in based interface is adopted.

In this talk, we present the environment with a focus on efficient implementation of verified optimization routines for parameter identification in SOFC systems. The task is to minimize a quadratic cost function which contains the solution to the initial value problem (IVP) for the above-mentioned ODEs as one of its constituent parts. At the moment, the exact solution to the IVP is approximated by the explicit Euler method [3]. The cost function is complex in practice since it is composed of many summands (the number of which is proportional to the number of measurements) and is strongly influenced by cancellation.
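For reference, the explicit Euler approximation of the IVP can be sketched as follows (a generic textbook sketch with an illustrative scalar ODE, not the VeriCell implementation):

```python
def euler(f, t0, u0, h, steps):
    """Explicit Euler method for u'(t) = f(t, u), u(t0) = u0.

    Returns the list of approximate states u_0, u_1, ..., u_steps,
    where u_{k+1} = u_k + h * f(t_k, u_k).
    """
    t, u, traj = t0, u0, [u0]
    for _ in range(steps):
        u = u + h * f(t, u)
        t = t + h
        traj.append(u)
    return traj

# illustrative linear decay law u' = -u, u(0) = 1
traj = euler(lambda t, u: -u, 0.0, 1.0, 0.01, 100)
```

The non-verified truncation and rounding errors of such a scheme are exactly what motivates the switch to verified IVP solvers mentioned below.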


The ODE-based model takes into account preheated air and fuel gas supplied to the SOFC system as well as the corresponding reaction enthalpies. The parameters of interest describe the thermal resistances of the stack materials, the dependency of the heat capacities on the temperature, and the heat produced during the exothermic electrochemical reactions in each individual fuel cell.

Important aspects in solving this task are to increase the model accuracy and to reduce computing times. In the first case, the use of verified IVP solvers such as VNODE-LP instead of the Euler approximation is necessary. In the second case, the employment of the GPU might be profitable, along with ordinary multi-kernel parallelization. In this talk, we show what steps are necessary to be able to identify parameters of SOFC models of different dimensions using parallelization techniques and the GPU, highlighting in the latter case the questions of accurate implementation, efficient memory use, and correct choice of the working precision. These issues are demonstrated on examples modeled and simulated in VeriCell, which gives an overview of its main features.

References:

[1] E. Dyllong and S. Kiel, A comparison of verified distance computation between implicit objects using different arithmetics for range enclosure, Computing, 2011.

[2] A. Rauh, T. Dotschel, and H. Aschemann, Experimental parameter identification for a control-oriented model of the thermal behavior of high-temperature fuel cells, in CD-Proc. of IEEE Intl. Conference MMAR 2011, Miedzyzdroje, Poland, 2011.

[3] A. Rauh, T. Dotschel, E. Auer and H. Aschemann, Interval methods for control-oriented modeling of the thermal behavior of high-temperature fuel cell stacks, in Proc. of SysID 2012 (accepted).


Use of Grothendieck’s inequality

in interval computations:

quadratic terms are estimated

accurately modulo a constant factor

Olga Kosheleva and Vladik Kreinovich

University of Texas at El Paso, El Paso, TX 79968, [emailprotected], [emailprotected]

Keywords: enclosure methods, Grothendieck inequality, feasible algorithms

In interval computations, one of the most widely used methods of efficiently computing an enclosure Y for the range y = f(x1, . . . , xn) of a given function f(x1, . . . , xn) on a given box x = x1 × . . . × xn is the Mean Value (MV) method:

Y = f(x̃1, . . . , x̃n) + ∑_{i=1}^{n} (∂f/∂xi)(x) · [−∆i, ∆i],

where x̃i is the midpoint of the i-th interval, ∆i is its radius, and the ranges of the derivatives f,i := ∂f/∂xi can be estimated, e.g., by using straightforward interval computations; see, e.g., [5]. This method has excess width O(∆²), where ∆ := max ∆i.

Can we come up with more accurate enclosures? We cannot get too drastic an improvement: even for quadratic functions f(x1, . . . , xn), computing the interval range is NP-hard (see, e.g., [4,7]), and therefore (unless P = NP) a feasible algorithm with excess width O(∆^(2+ε)) is impossible. What we can do is try to decrease the overestimation of the quadratic term. It turns out that such a possibility follows from an inequality proven by A. Grothendieck in 1953 [2].

Specifically, the MV method is based on the 1st-order Mean Value Theorem (MVT): f(x̃ + ∆x) = f(x̃) + ∑ f,i(x̃ + η) · ∆xi for some ηi ∈ [−∆i, ∆i] [3]. Instead, we propose to estimate the range by adding estimates for the ranges of the linear, quadratic, and cubic terms in the 3rd-order MVT:

f(x̃ + ∆x) = f(x̃) + ∑ f,i(x̃) · ∆xi + ∑ f,ij(x̃) · ∆xi · ∆xj + ∑ f,ijk(x̃ + η) · ∆xi · ∆xj · ∆xk.

The range of the cubic term is estimated via straightforward interval computations; the resulting estimate is of order O(∆³). The range of the linear part f(x̃) + ∑ f,i(x̃) · ∆xi can be explicitly described as [ỹ − ∆, ỹ + ∆], where ỹ := f(x̃) and ∆ := ∑ |f,i(x̃)| · ∆i. So, the remaining problem is: how accurately can we find the range [−Q, Q] of the quadratic term ∑_{i,j=1}^{n} aij · ∆xi · ∆xj (where aij := f,ij(x̃)) on the box [−∆1, ∆1] × . . . × [−∆n, ∆n]?

By re-scaling, we conclude that Q is equal to the maximum of the function B(z) := ∑_{i,j=1}^{n} bij · zi · zj (where bij := aij · ∆i · ∆j) over values zi ∈ [−1, 1]. Grothendieck’s inequality enables us to estimate the maximum Q′ of the related bilinear function b(z, t) := ∑_{i,j=1}^{n} bij · zi · tj over zi, tj ∈ {−1, 1}: namely, we can feasibly compute a Q′′ for which KG⁻¹ · Q′′ ≤ Q′ ≤ Q′′, where KG ∈ [1, 1.782] (see, e.g., [1,6]). One can easily see that Q′ is also equal to the maximum of b(z, t) when zi, tj ∈ [−1, 1]. Since B(z) = b(z, z), we have Q ≤ Q′; on the other hand, since b(z, t) = B((z + t)/2) − B((z − t)/2), we have Q′ ≤ 2Q. Thus, Q′/2 ≤ Q ≤ Q′, and so

Q′′/(2KG) ≤ Q ≤ Q′′.

Hence, by computing Q′′, we can feasibly estimate the quadratic term Q accurately modulo a small constant factor 2KG ≤ 3.6.

References:

[1] N. Alon, A. Naor, Approximating the cut-norm via Grothendieck’s inequality, SIAM J. Comp., 35 (2006), No. 4, pp. 787–803.

[2] A. Grothendieck, Resume de la theorie metrique des produits tensoriels topologiques, Boll. Soc. Mat. Sao-Paulo, 8 (1953), pp. 1–79.

[3] O. Kosheleva, How to explain usefulness of different results when teaching calculus: example of the Mean Value Theorem, J. Uncertain Systems, 7 (2013), to appear.

[4] V. Kreinovich, A. Lakeyev, J. Rohn, P. Kahl, Computational Complexity and Feasibility of Data Processing and Interval Computations, Kluwer, Dordrecht, 1998.

[5] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[6] G. Pisier, Grothendieck’s theorem, past and present, Bulletin of the American Mathematical Society, 49 (2012), No. 2, pp. 237–323.

[7] S.A. Vavasis, Nonlinear Optimization: Complexity Issues, Oxford University Press, N.Y., 1991.


On boundedness and unboundedness

of polyhedral estimates

for reachable sets of linear systems

Elena K. Kostousova

Institute of Mathematics and Mechanics,
Ural Branch of the Russian Academy of Sciences,
16, S. Kovalevskaja street, 620990, Ekaterinburg, Russia
[emailprotected]

Keywords: interval analysis, linear differential systems, reachable sets,polyhedral estimates, parallelepipeds

The problem of constructing trajectory tubes (in particular, reachable tubes, which describe the dynamics of reachable sets) is an essential theme in control theory [1]. Since the practical construction of these tubes may be cumbersome, different numerical methods have been devised for this purpose. Among them, techniques were developed for estimating reachable sets by domains of some fixed shape such as ellipsoids, parallelepipeds, and zonotopes. In particular, box-valued estimates may be constructed by means of interval calculations. But such estimates can turn out to be rather conservative and even unbounded due to the wrapping effect [2,3] known in interval analysis. To make exact representations of reach sets possible, A.B. Kurzhanski proposed to use families of fixed-shape estimates [1,4] and, moreover, families of so-called tight estimates [4]. We expanded this approach to polyhedral (parallelepiped-valued) estimates. The family P of outer polyhedral estimates of reachable sets for linear differential systems with parallelepiped-valued uncertainties in initial states and additive inputs may be introduced [5]. These estimates are determined by a given dynamics of orientation matrices P(t) ∈ Rn×n (this function is the parameter of the family) and by corresponding parameterized differential equations which describe the dynamics of centers and “semi-axis” values of parallelepipeds. Considering different types of the orientation matrix dynamics P(·), we obtain several subfamilies Pi ⊆ P of the estimates with different properties, in particular, subfamilies P3 and P1 of tight and touching [6] (tight in n specific directions) estimates (both ensure the exact representations of reachable sets through intersections of their elements). Box-valued estimates may be attributed to the subfamily P2 of estimates with constant orientation matrices. In fact, the orientation matrix V = P(0) at the initial time is the parameter of all mentioned subfamilies Pi.


The paper presents our recent results on studying the properties of boundedness and unboundedness, on the infinite time interval, of outer polyhedral estimates for reachable sets of systems with stable matrices. The mentioned properties are determined by the interaction of three factors: the matrix V, the real Jordan matrix of the system’s matrix, and the properties of the bounding sets for the uncertainties. The results of this interaction are different for different subfamilies Pi. We formulate the corresponding criteria for boundedness / unboundedness of estimates from P1 and P2 (see [7] for some of them), including characterizing the possible degree of growth of the estimates in terms of the exponents. Then we present new results concerning the subfamily P3 of tight estimates. In particular, it turns out that for two-dimensional systems all estimates from P3 are bounded, and in addition they turn out to be orthogonal parallelepipeds. This is unlike the two other cases mentioned above, because there are two-dimensional systems for which all estimates from P1 and P2 are unbounded (these systems are of different kinds for P1 and P2). The results of numerical simulations are presented.

The work was supported by the Program of the Presidium of the Russian Academy of Sciences “Dynamic Systems and Control Theory” under support of the Ural Branch of RAS (project 12-P-1-1019), by the State Program for Support of Leading Scientific Schools of the Russian Federation (grant 2239.2012.1), and by the Russian Foundation for Basic Research (grant 12-01-00043).

References:

[1] A.B. Kurzhanski, I. Valyi, Ellipsoidal Calculus for Estimation and Control, Birkhauser, Boston, 1997.

[2] R.E. Moore, Methods and Applications of Interval Analysis, SIAM, Philadelphia, 1979.

[3] A.N. Gorban, Yu.I. Shokin, V.I. Verbitskii, Simultaneously dissipative operators and the infinitesimal wrapping effect in interval spaces, Vychisl. Tekhnol., 2 (1997), No. 4, pp. 16–48.

[4] A.B. Kurzhanski, P. Varaiya, On ellipsoidal techniques for reachability analysis, Parts I, II, Optim. Methods Softw., 17 (2002), No. 2, pp. 177–237.

[5] E.K. Kostousova, Outer polyhedral estimates for attainability sets of systems with bilinear uncertainty, Prikl. Mat. Mekh., 66 (2002), No. 4, pp. 559–571 (Russian); translation in J. Appl. Math. Mech., 66 (2002), No. 4, pp. 547–558.


[6] E.K. Kostousova, State estimation for dynamic systems via parallelotopes: optimization and parallel computations, Optim. Methods Softw., 9 (1998), No. 4, pp. 269–306.

[7] E.K. Kostousova, On the boundedness of outer polyhedral estimates for reachable sets of linear systems, Zh. Vychisl. Mat. Mat. Fiz., 48 (2008), No. 6, pp. 974–989 (Russian); translation in Comput. Math. Math. Phys., 48 (2008), No. 6, pp. 918–932.

Arbitrary precision real interval

and complex interval computations

Walter Kramer

Department of Mathematics and Computer Science
University of Wuppertal
Wuppertal, Germany
[emailprotected]

Keywords: arbitrary precision, complex interval arithmetic, complex inter-val functions, extended interval Newton method

The design and development of two new software libraries for arbitrary precision real interval and complex interval computations are discussed. These libraries provide a comprehensive set of basic operations and mathematical functions. Their comfortable usage (due to C++ operator and function overloading) is demonstrated on challenging examples like an extended interval Newton method to automatically bound all zeros of a given function. The derivatives are computed via algorithmic differentiation. The libraries are open source and freely available on the net.

References:

[1] W. Kramer and F. Blomquist, Arbitrary precision complex interval computations in C-XSC, in: Parallel Processing and Applied Mathematics (Roman Wyrzykowski, Jack Dongarra, Konrad Karczewski and Jerzy Wasniewski, Eds.), Springer Verlag, 2012, Lecture Notes in Computer Science, Volume 7204, pp. 457–466.


Decision making

under interval uncertainty

Vladik Kreinovich

Department of Computer Science
University of Texas at El Paso
El Paso, TX 79968, USA
[emailprotected]

Keywords: interval uncertainty, decision making, utility theory, p-boxes,modal intervals, symmetries, control

To make a decision, we must:

• find out the user’s preference, and

• help the user select an alternative which is the best according to these preferences.

A general way to describe user preferences is via the notion of utility (see, e.g., [7]): we select a very bad alternative A0 and a very good alternative A1; the utility u(A) of an alternative A is then defined as the probability p for which A is equivalent to the lottery in which we get A1 with probability p and A0 otherwise. One can prove that utility is determined uniquely modulo linear re-scaling (corresponding to different choices of A0 and A1), and that the utility of a decision with probabilistic consequences is equal to the expected utility of these consequences.

Once the utility function u(d) is elicited, we select the decision dopt with the largest utility u(d). Interval techniques can help in finding the optimizing decision; see, e.g., [4].

Often, we do not know the exact probability distribution, so we need to extract, from the sample, the characteristics of a distribution which are most appropriate for decision making. We show that, under reasonable assumptions, we should select moments and the cumulative distribution function (cdf). Based on a finite sample, we can only find bounds on these characteristics, so we need to deal with bounds (intervals) on moments [6] and bounds on the cdf [1] (a.k.a. p-boxes).

Once we know the intervals [u̲(d), ū(d)] of possible values of utility, which decision shall we select? We can simply select a decision d0 which may be optimal, i.e., for which ū(d0) ≥ max_d u̲(d), but there are usually many such decisions; which of them should we select? It is reasonable to assume that this selection should not depend on a linear re-scaling of utility; under this assumption, we get the Hurwicz optimism-pessimism criterion α · ū(d) + (1 − α) · u̲(d) → max [7]. The next question is how to select α: interestingly, e.g., too optimistic values (α > 0.5) do not lead to good decisions.
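For illustration, the Hurwicz selection over interval-valued utilities is a one-liner (the candidate decisions and the values of α below are hypothetical):

```python
def hurwicz_choice(utilities, alpha):
    """Pick the decision maximizing alpha * u_upper + (1 - alpha) * u_lower,
    where utilities maps each decision to its interval (u_lower, u_upper)."""
    return max(utilities,
               key=lambda d: alpha * utilities[d][1]
                             + (1 - alpha) * utilities[d][0])

# hypothetical decisions with interval-valued utilities [u_lower, u_upper]
utilities = {"safe": (0.5, 0.6), "risky": (0.1, 0.9), "balanced": (0.4, 0.7)}
```

A cautious α = 0.3 selects "safe" here, while an optimistic α = 0.9 switches to "risky", matching the remark that overly optimistic α can be problematic.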

In some situations, it is difficult to elicit even interval-valued utilities. In many such situations, there are reasonable symmetries which can be used to make a decision; see, e.g., [5]. We show how this method works on the example of selecting a location for a meteorological tower [3].

Finally, while optimization problems are ubiquitous, sometimes we need to go beyond optimization: e.g., we need to make sure that the system is controllable for all disturbances within a given range. In such problems, modal intervals [2] naturally appear. In more complex situations, we need to go beyond modal intervals, to more general Shary’s classes.

References:

[1] S. Ferson, V. Kreinovich, J. Hajagos, W. Oberkampf, L. Ginzburg, Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories, 2007, Publ. 2007-0939.

[2] E. Gardenes et al., Modal intervals, Reliable Computing, 7 (2001),pp. 77–111.

[3] A. Jaimes, C. Tweedie, V. Kreinovich, M. Ceberio, Scale-invariant approach to multi-criterion optimization under uncertainty, with applications to optimal sensor placement, in particular, to sensor placement in environmental research, International Journal of Reliability and Safety, 6 (2012), No. 1–3, pp. 188–203.

[4] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[5] H.T. Nguyen, V. Kreinovich, Applications of Continuous Mathematics to Computer Science, Kluwer, Dordrecht, 1997.

[6] H.T. Nguyen, V. Kreinovich, B. Wu, G. Xiang, Computing Statistics under Interval and Fuzzy Uncertainty, Springer Verlag, 2012.

[7] H. Raiffa, Decision Analysis, McGraw-Hill, Columbus, Ohio, 1997.


Excluding regions using Sobol sequences

in an interval branch-and-bound method

Bartlomiej Jacek Kubica

Institute of Control and Computation Engineering (of WUT)
ul. Nowowiejska 15/19, 00-665 Warsaw, Poland

[emailprotected]

Keywords: interval methods, exclusion regions, Sobol sequences, underde-termined systems, nonlinear equations

Interval branch-and-prune (b&p) methods (also called by a more general term, branch-and-bound [2]) are commonly used to solve systems of nonlinear equations and several other problems. Their main drawbacks (as for most combinatorial approaches) are high computational cost and high memory requirements in the pessimistic case. This computational burden can be avoided by choosing proper heuristics and policies to adapt the process to a specific problem. Hence, any improvements or accelerations of the process are very worthwhile.

The paper is going to consider a preprocessing step of the b&p method, in which some infeasible regions are removed from further search. For the system of equations f(x) = 0, x ∈ x ⊆ Rn, we can remove any box z ⊆ x such that fi(z) > ε or fi(z) < −ε for all z ∈ z (i.e., fi(z) ⊆ [ε, +∞) or fi(z) ⊆ (−∞, −ε]) for an arbitrary equation i and some ε > 0.
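This removal test admits a direct sketch in naive interval arithmetic (ours: outward rounding is omitted, and the natural interval extension covers only the operations needed by the illustrative equation f1(x) = x1² + x2² − 4):

```python
def iadd(a, b):
    """Interval addition [a] + [b]."""
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    """Interval subtraction [a] - [b]."""
    return (a[0] - b[1], a[1] - b[0])

def isqr(a):
    """Interval square [a]^2 (tight, unlike [a]*[a])."""
    ps = (a[0] * a[0], a[0] * a[1], a[1] * a[1])
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(ps)
    return (lo, max(ps))

def can_exclude(F, box, eps):
    """Box z is removable if some component range F_i(z) stays outside
    [-eps, eps], i.e. F_i(z) is a subset of [eps, inf) or (-inf, -eps]."""
    return any(lo >= eps or hi <= -eps for lo, hi in F(box))

# one illustrative equation: f1(x) = x1^2 + x2^2 - 4
F = lambda box: [isub(iadd(isqr(box[0]), isqr(box[1])), (4.0, 4.0))]
```

For instance, the box [0, 0.5] × [0, 0.5] is provably infeasible for this equation, while [1, 2] × [1, 2] is not excluded, since its range encloses 0.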

Tools that are used to solve the above problem include:

• simple computations of interval extension of functions,

• solving the interval tolerance problem (see, e.g., [6]) for the linearized problem,

• applying ε-inflation [2] to enlarge the infeasible box being removed.

Such a procedure – simple and well-known – does not specify how to choose initial regions for removal, which is crucial for efficiency. These regions can be constructed around some “seeds” scattered over the problem domain. The “seeds” can be chosen randomly, but a better approach is to use a deterministic low-discrepancy sequence [1], e.g., the Sobol sequence [8], also called the LPτ sequence. Points of this sequence are distributed in a very regular way over the search domain, and the method remains deterministic (hence easy to investigate). Also, there are efficient algorithms to generate such sequences [8].
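Seed generation itself takes only a few lines; the sketch below uses a base-2 Van der Corput / Hammersley construction as a simple low-discrepancy stand-in for a true Sobol (LPτ) generator, scaled to the search box (our illustration, not the generator of [8]):

```python
def van_der_corput(k):
    """Radical inverse of k in base 2: 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, ..."""
    v, denom = 0.0, 1.0
    while k:
        denom *= 2.0
        v += (k & 1) / denom
        k >>= 1
    return v

def hammersley_seeds(n_seeds, box):
    """n_seeds low-discrepancy points inside the box, one per row.

    First coordinate is the equispaced (k - 1/2)/N grid; the remaining
    coordinates reuse shifted radical inverses -- a crude stand-in for
    genuine Sobol points.
    """
    dim = len(box)
    seeds = []
    for k in range(1, n_seeds + 1):
        unit = [(k - 0.5) / n_seeds] + [van_der_corput(k + j * n_seeds)
                                        for j in range(dim - 1)]
        seeds.append([lo + u * (hi - lo)
                      for u, (lo, hi) in zip(unit, box)])
    return seeds

seeds = hammersley_seeds(4, [(-1.0, 1.0), (0.0, 2.0)])
```

Each seed would then be surrounded by a candidate box, checked for infeasibility, and ε-inflated as described above.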

According to the author’s observations, it seems most efficient to choose n seeds, i.e., as many as the number of variables, independently of the number of equations.

The considered approach seems particularly useful for underdetermined systems, where the solutions are not isolated points, but belong to a continuous set. For such systems, we cannot verify the uniqueness of a solution (as, e.g., in [7]) and, on the other hand, deleting infeasible regions may result in boxes in which segments of the solution manifold are easy to verify.

Thanks to this approach, speedups of 30–50% are obtained, at least for some problems. The paper is going to present a few variants of the method and its cooperation with the equation-systems solver developed in [3]–[5]. Computational experiments for examples of underdetermined and well-determined systems will be considered. Parallelization of the method will also be investigated.

References:

[1] M. Drmota, R.F. Tichy, Sequences, Discrepancies and Applications, Springer-Verlag, Berlin, Heidelberg, 1997.

[2] R.B. Kearfott, Rigorous Global Search: Continuous Problems, KluwerAcademic Publishers, Dordrecht, 1996.

[3] B.J. Kubica, Interval methods for solving underdetermined nonlinear equations systems, Reliable Computing, 15 (2011), pp. 207–217.

[4] B.J. Kubica, Intel TBB as a tool for parallelization of an interval solver of nonlinear equations systems, Tech. rep. no 09-02, ICCE WUT, 2009.

[5] B.J. Kubica, Tuning the multithreaded interval method for solving underdetermined systems of nonlinear equations, PPAM 2011 Proceedings, LNCS, accepted.

[6] S.P. Shary, Finite-dimensional Interval Analysis, XYZ, 2010 (in Russian).

[7] H. Schichl, A. Neumaier, Exclusion regions for systems of equations, SIAM Journal on Numerical Analysis, 42 (2004), pp. 383–408.

[8] Sobol sequence generator, http://web.maths.unsw.edu.au/~fkuo/sobol/.


Interval methods for computing

various refinements of Nash equilibria

Bartlomiej Jacek Kubica and Adam Wozniak

Institute of Control and Computation Engineering (of WUT)
ul. Nowowiejska 15/19, 00-665 Warsaw, Poland

[emailprotected]

Keywords: interval methods, game theory, solution concepts, strong Nash equilibria

Game theory has numerous applications in many branches of theoretical and applied science. One of the basic solution concepts for non-cooperative games is the idea of a Nash equilibrium [5]. It can be defined as a situation (an assignment of strategies to all players) in which it is not beneficial for any of the players to change their strategy unless the others do so. Such points, however, have several drawbacks – both theoretical (rather strong assumptions about the players’ knowledge and rationality) and practical (they can be Pareto-inefficient).

Hence, several “refinements” to the notion have been introduced, including epsilon-equilibrium, strong Nash equilibrium, manipulated Nash equilibrium and many others.

On the other hand, computing any kind of these solutions can be a hard problem. In particular, very few computational methods exist for continuous games.

In our previous paper [3] we proposed an interval algorithm to compute Nash equilibria of a continuous non-cooperative game. In [4] it was shown that the interval branch-and-bound (b&b) method can be used to compute the enclosure of any set of points that fulfill a given condition, described by some kind of predicate formula (see also [2]). But, as all refinements of the Nash equilibrium can be described this way, computing all of them should be possible using a version of the b&b framework.
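The predicate-based branch-and-bound idea can be illustrated by a toy one-dimensional sketch (ours, not the authors' code): boxes on which the predicate cannot be refuted are kept once they are small enough; all others are discarded or bisected. Here the predicate is "0 ∈ F(x)" for f(x) = x² − 2 on [0, 4]; the outward rounding needed for full rigor is omitted for brevity.

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>

typedef struct { double lo, hi; } ival;

/* Interval extension of f(x) = x^2 - 2 on x >= 0 (monotone, so endpoint
   evaluation suffices; a rigorous code would round outward). */
static ival F(ival x) {
    ival r = { x.lo * x.lo - 2.0, x.hi * x.hi - 2.0 };
    return r;
}

/* Predicate: the box may contain a point with f(x) = 0. */
static int may_contain_zero(ival x) {
    ival fx = F(x);
    return fx.lo <= 0.0 && 0.0 <= fx.hi;
}

/* Branch and bound: collect sub-boxes of width <= tol on which the
   predicate cannot be refuted; returns the number of boxes stored. */
static int bandb(ival x, double tol, ival *out, int cap) {
    ival stack[256];
    int top = 0, n = 0;
    stack[top++] = x;
    while (top > 0) {
        ival cur = stack[--top];
        if (!may_contain_zero(cur)) continue;   /* discard infeasible box */
        if (cur.hi - cur.lo <= tol) {           /* small enough: keep it  */
            if (n < cap) out[n++] = cur;
            continue;
        }
        double mid = 0.5 * (cur.lo + cur.hi);   /* bisect */
        stack[top++] = (ival){ cur.lo, mid };
        stack[top++] = (ival){ mid, cur.hi };
    }
    return n;
}
```

The union of the returned boxes encloses the whole solution set of the predicate, which is exactly the set-enclosure property the abstract relies on.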

The paper is going to investigate interval algorithms for computing other solution concepts for continuous games. Data structures and parallelization issues will be considered. In particular, the concept of strong Nash equilibrium [1] and some of its modifications are going to be analyzed.


As an example, we consider a simple and interesting pursuit game, developed by Steinhaus [6,7]. Some variants and modifications of the game (including an increased number of players) are going to be presented, too.

References:

[1] R.J. Aumann, S. Hart (eds.), Handbook of Game Theory with Economic Applications, Vol. 1, North-Holland, 1992, Chapter 4.

[2] V. Kreinovich, B.J. Kubica, From computing sets of optima, Pareto sets and sets of Nash equilibria to general decision-related set computations, Journal of Universal Computer Science, 16 (2010), pp. 2657–2685.

[3] B.J. Kubica, A. Wozniak, An interval method for seeking the Nash equilibria of non-cooperative games, LNCS, 6068 (2010), pp. 446–455.

[4] B.J. Kubica, A class of problems that can be solved using interval algorithms, SCAN 2010 Proceedings, Computing, 94 (2012), pp. 271–280.

[5] J.F. Nash, Equilibrium points in n-person games, Proceedings of the National Academy of Sciences, 36 (1950), pp. 48–49.

[6] H. Steinhaus, Definitions for a theory of games and pursuit, Naval Research Logistics Quarterly, 7 (1960), pp. 105–107.

[7] H. Steinhaus, O grach (swobodnie) [Games, an informal talk], Studia Filozoficzne, 5 (1969), pp. 3–13 (in Polish).


Interval approach

to identification of parameters

of experimental process model

Sergey I. Kumkov1 and Yuliya V. Mikushina2

1Institute of Mathematics and Mechanics UrB RAS, 16, S. Kovalevskaya str., 620219 Ekaterinburg, Russia

2Institute of Organic Synthesis UrB RAS, 20, S. Kovalevskaya str., 620990 Ekaterinburg, Russia

[emailprotected]

Keywords: interval identification, parameters, model, experimental process

The work considers an application of general interval analysis methods (e.g., [1]) to a special practical problem of parameter identification for a real experimental chemical process [2]. In the process, the concentration S(t) of peroxide H2O2 is measured versus the time t of the catalytic decomposition reaction for various nano-catalysts. Two possible models of the process are investigated.

Model 1. The experimental process is described by the function S(t, C, α, BG) = C exp(αt) + BG. The vector of parameters to be identified is three-dimensional: C > 0 is the initial value of the concentration; α < 0 is the activity coefficient of the first approximation model; BG > 0 is a background value.

Model 2. Here, the describing function is S(t, C, α, β, BG) = C exp(αt + βt²) + BG, where, in comparison with Model 1, the coefficient β of activity of the second approximation is introduced (β < 0). So, the vector of parameters to be identified is four-dimensional.

The following input data are given [2]: the sample of noised measurements tk, Sk = S(tk), k = 1, . . . , 4; it is assumed that the values tk are known exactly, but the measurements Sk are noised with total additive errors bounded in modulus as |ek| ≤ emax = 0.035. The experiments have been performed very carefully, with very pure reactants and small actual measuring errors. As a consequence, the measurements were not distorted by jumps, and there are no outliers in the sample. The results of measuring the process for three different catalysts (1–3) are given in Fig. 1. To show the trends of the processes in Experiments 2 and 3, the samples are approximated (black curves) by the standard regression method using Model 1. For Experiment 1, the uncertainty intervals Hk of length 2emax are drawn around each measurement: Hk = [Sk − emax, Sk + emax].

The problem of identification is formulated as follows: it is necessary to identify (to construct) the set of admissible values of the model parameters consistent with the given input data.

We consider the main idea and procedures of the elaborated algorithms for Model 1. The following procedures are performed. By shifting the background parameter BG to the left-hand side and by the standard logarithmic operation, the initial function is transformed into the function y = ln(S(t) − BG) with linear dependence on the parameter α and a new parameter ln C: y(t, ln C, α) = ln(S(t) − BG) = ln C + αt; note that the central term in this expression will be an interval for each tk. Some reasonable a priori interval of the parameter BG is introduced with a grid BGm, m = 1, . . . , M. Application of the algorithms [3] to constructing the informational set I(ln C, α, BGm) for each node BGm (together with adjusting the position of the grid, its step, and the number of nodes) gives the whole desired informational set I(ln C, α, BG) as a collection (Fig. 2) of its cross-sections I(ln C, α, BGn), n = 1, . . . , N, over all admissible nodes N of the adjusted grid, i.e., nodes for which the cross-section is not empty. For Model 2, the algorithms are similarly repeated for two grids in the parameters BG and β. Note that the elaborated approach is significantly faster and more accurate than ones based on the application of parallelotopes [1].


References:

[1] L. Jaulin, M. Kieffer, O. Didrit, E. Walter, Applied Interval Analysis, Springer, London, 2001.

[2] L.A. Petrov, Yu.V. Mikushina, et al., Catalytic activity of oxide polycrystal and nano-size tungsten bronzes produced by electrolysis of molten salts, Izv. Acad. Nauk, ser. Chemical, 2011, No. 10, pp. 1951–1954.

[3] S.I. Kumkov, Processing of experimental data on ionic conductivity of molten electrolyte by the interval analysis methods, Rasplavy, 2010, No. 3, pp. 86–96.


The libieee754 compliance library

for the IEEE 754-2008 standard

Olga Kupriianova and Christoph Lauter

UPMC, LIP6, Equipe PEQUAN, 4 place Jussieu, F-75252 Paris Cedex 05, France

[emailprotected], [emailprotected]

Keywords: reliable floating-point arithmetic, correct rounding, heterogeneous floating-point operations, IEEE 754-2008

In 1985, the IEEE 754 Standard for Binary Floating-Point Arithmetic [2] provided concise means to achieve portability and provability of Floating-Point (FP) programs. The high level of achieved reliability was the key to its widespread adoption.

In 2008, a revised version, the IEEE 754-2008 Standard for Floating-Point Arithmetic [1], was published. This revision reinforced the reproducibility aspects of the standard and added a few new operations and concepts, such as decimal arithmetic, heterogeneous operations or fused-multiply-and-add (FMA).

As of today, the IEEE 754-2008 standard has already been accepted as the preferred FP Arithmetic system. For instance, the P1788 working group∗ recognized it as a base for standardized Interval Arithmetic.

However, IEEE 754-2008 is currently not completely supported by Programming Languages like C99, nor by Operating Systems, such as GNU/Linux. In C99, some operations are missing and some are only partly compliant with the standard. For instance, decimal-to-binary conversion in scanf commonly implements correct rounding only for round-to-nearest mode, or FMA might incorrectly round twice. Complete IEEE 754-2008 compliance is available only on Intel-compatible processors, through a closed-source library provided by Intel†.

For Open Source IEEE 754-2008 compliance, this work proposes the libieee754 library. The library implements all the 354 operations IEEE 754-2008 mandates for Binary FP Arithmetic in both binary32 and binary64 formats. While the library is reasonably fast, speed was not the main purpose but 100% standard compliance. All operations support all rounding modes and set

∗cf. http://grouper.ieee.org/groups/1788/
†cf. http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/2011Update/cpp/lin/cref_cls/common/cppref_libbfp754_ovrvw.htm


all flags as required by IEEE 754-2008. The library is reentrant, as there is no global state other than the global state foreseen by the standard.

The functions in libieee754 performing correctly rounded conversion from arbitrary-length decimal character sequences to the binary FP formats should be highlighted. They support all rounding modes and do not perform any dynamic memory allocation. While the algorithms found in the literature [3] amount to one-step correctly rounded decimal-to-binary conversion with unknown memory consumption limits, the novel algorithm implemented in libieee754 performs decimal-to-binary conversion indirectly in three steps: first, convert from decimal to binary and round to a floating-point midpoint; second, exactly convert the binary midpoint back to decimal; and third, round correctly. This allows memory consumption to be known beforehand, avoiding any dynamic memory allocation.

The algorithm for decimal-to-binary conversion set aside, the most important difficulty when designing libieee754 was with the rounding mode, which cannot be queried by the library code, and with the IEEE 754 flags. Each FP operation hence needed to be implemented with all 4 rounding modes and possible side-effects on flags in mind.

The libieee754 library was completely proven on paper and extensively tested. The proofs are available for reference. Future work is supposed to bring formal proofs, in a system such as Coq [4].

In the future, libieee754 is supposed to be extended with respect to the binary128 format, decimal FP Arithmetic and the optional parts of the IEEE 754-2008 Standard. Additionally, the library’s code base should be extended to allow for compilation on systems where no hardware floating-point support is available and where a complete emulation of all floating-point operations using integer instructions will be needed.

References:

[1] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754™-2008, IEEE, New York, NY, USA, 2008.

[2] IEEE Standard for Binary Floating-Point Arithmetic, IEEE Std 754-1985, IEEE, New York, NY, USA, 1985.

[3] M. Hack, On intermediate precision required for correctly-rounding decimal-to-binary floating-point conversion, Proc. of RNC 6, 2004, pp. 113–134.

[4] G. Melquiond, Floating-point arithmetic in the Coq system, Proc. of RNC 8, 2008, pp. 93–102.


Monotone and convex interpolation

by weighted quadratic splines

Boris I. Kvasov

Institute of Computational Technologies SD RAS
6, Lavrentiev ave., 630090 Novosibirsk, Russia
[emailprotected]

Keywords: shape-preserving interpolation, weighted C1 quadratic splines, adaptive choice of shape control parameters, weighted B-splines and control point approximation

We are interested in numerically fitting a curve through a given finite set of points Pi = (xi, fi), i = 0, . . . , N + 1, in the plane, with a = x0 < x1 < · · · < xN+1 = b. These points can be thought of as coming from the graph of some function f defined on [a, b]. We are particularly interested in algorithms which preserve local monotonicity and convexity of the data (or function). Here, we shall focus only on those algorithms which use C1 piecewise quadratic interpolants.

Monotone and convex local C1 quadratic splines with perhaps one additional knot in each subinterval between data points were considered by L.L. Schumaker [7] (see also [2,5] and references therein). In these interactive algorithms, the location of the additional knots allows the user to adjust the spline to the data and to take full advantage of the flexibility which quadratic splines permit. Some improvements of these algorithms were suggested in [1,4]. Very similar algorithms were also obtained in [9].

In contrast with the previously published algorithms for shape preserving quadratic splines, which rely on local schemes, our algorithms are based on global weighted C1 quadratic splines. Such splines generalize the global quadratic splines introduced by Yu.N. Subbotin [8] and are similar to weighted C1 cubic splines [6]. We let the additional knots be the midpoints of each subinterval, so that they actually have their optimal location.

While there are many methods available for the solution of the shape-preserving interpolation problem (see a very detailed literature review in [3]), the shape parameters are mainly viewed as an interactive design tool for manipulating the shape of a spline curve.


The main challenge of this paper is to present algorithms that select shape control parameters (weights) automatically. We give two such algorithms: one to preserve the data monotonicity and the other to retain the data convexity. These algorithms are based on the sufficient conditions of monotonicity and convexity for C1 quadratic splines and adapt the spline curve to the geometric behavior of the data. The main point, however, is to determine whether the error of approximation remains small under the proposed algorithms. To this end we prove two theorems to estimate error bounds. We show that by a special choice of shape parameters one can raise the order of approximation. We also construct weighted B-splines and consider control point approximation. Recurrence relations for weighted B-splines offer valuable insight into their geometric behavior.
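As a purely illustrative sketch (not the authors' weighted construction), the following C code evaluates a C1 quadratic spline interpolant with the extra knot fixed at the midpoint of each subinterval, given data values fi and prescribed slopes di; the adaptive weights of the actual algorithms, which enforce monotonicity or convexity, are not modeled here.

```c
#include <assert.h>
#include <math.h>

/* Evaluate a C^1 quadratic spline with one extra knot at the midpoint of
   each subinterval [x_i, x_{i+1}], interpolating (x_i, f_i) with slopes d_i.
   On each subinterval two quadratic pieces are joined with C^1 continuity
   at the midpoint; the coefficients follow from the end conditions. */
static double qspline_eval(const double *x, const double *f, const double *d,
                           int N, double t) {
    int i = 0;
    while (i < N - 2 && t > x[i + 1]) ++i;            /* locate subinterval */
    double h  = x[i + 1] - x[i];
    double dl = (f[i + 1] - f[i]) / h;                /* divided difference */
    double A1 = 2.0 * (dl - (3.0 * d[i] + d[i + 1]) / 4.0) / h;
    double A2 = (d[i + 1] - d[i]) / h - A1;
    double xi = x[i] + 0.5 * h;                       /* midpoint knot */
    if (t <= xi) {
        double s = t - x[i];
        return f[i] + d[i] * s + A1 * s * s;          /* left piece */
    }
    double v  = f[i] + d[i] * 0.5 * h + A1 * 0.25 * h * h; /* value at knot */
    double sl = d[i] + A1 * h;                             /* slope at knot */
    double s  = t - xi;
    return v + sl * s + A2 * s * s;                   /* right piece */
}
```

By construction the interpolant reproduces the data values at all knots for arbitrary slopes; the shape-preserving algorithms of the paper would additionally constrain the slopes and weights.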

References:

[1] R.A. DeVore, Z. Yan, Error analysis for piecewise quadratic curve fitting algorithms, Comput. Aided Geom. Des., 3 (1986), No. 1, pp. 205–215.

[2] T.A. Foley, Local control of interval tension using weighted splines, Comput. Aided Geom. Des., 3 (1986), No. 1, pp. 281–294.

[3] T.N.T. Goodman, Shape preserving interpolation by curves, In: J. Levesley, I. Anderson, J. Mason (eds.), Algorithms for Approximation IV, University of Huddersfield, 2002, pp. 24–35.

[4] M.H. Lam, Monotone and convex quadratic spline interpolation, Virginia Journal of Science, 41 (1990), No. 1, pp. 3–13.

[5] D.F. McAllister, J.A. Roulier, Interpolation by convex quadratic splines, Math. Comput., 17 (1980), pp. 238–246.

[6] K. Salkauskas, C1 splines for interpolation of rapidly varying data, Rocky Mountain Journal of Mathematics, 14 (1984), No. 1, pp. 239–250.

[7] L.L. Schumaker, On shape preserving quadratic spline interpolation, SIAM J. Numer. Anal., 20 (1983), No. 4, pp. 854–864.

[8] S.B. Stechkin, Yu.N. Subbotin, Splines in Computational Mathematics, Nauka, Moscow, 1976 (in Russian).

[9] V.T. Voronin, Construction of shape preserving splines, Preprint 404, Computing Center, Siberian Branch of USSR Academy of Sciences, Novosibirsk, 1982, 27 pp. (in Russian).


On unboundedness

of generalized solution sets

for interval linear systems

Anatoly V. Lakeyev

Institute of Systems Dynamics and Control Theory SB RAS, 134, Lermontov ave., 664033 Irkutsk, Russia

[emailprotected]

Keywords: interval linear systems, NP-hardness

We consider systems of linear interval equations of the form

Ax = b,

where A = [\underline{A}, \overline{A}] is an interval m×n-matrix, b = [\underline{b}, \overline{b}] is an interval m-vector, and x ∈ R^n. The interval matrix and the interval vector are traditionally understood as the sets

A = { A ∈ R^{m×n} | \underline{A} ≤ A ≤ \overline{A} },   b = { b ∈ R^m | \underline{b} ≤ b ≤ \overline{b} }

(by R^{m×n} from now on we denote the set of m×n-matrices). It is also assumed that \underline{A} ≤ \overline{A}, \underline{b} ≤ \overline{b}, and the inequalities between the matrices and the vectors are understood elementwise and coordinatewise, respectively.

Following the papers [1], we suppose that an m×n-matrix Λ = (λij), λij ∈ {−1, 1}, i = 1, . . . , m, j = 1, . . . , n, and an m-vector β = (β1, . . . , βm)⊤, βi ∈ {−1, 1}, i = 1, . . . , m, are given along with the interval m×n-matrix A and the interval m-vector b. The matrix A = (aij) is decomposed into the two matrices A∃ = (a∃ij) and A∀ = (a∀ij) so that

a∃ij = aij if λij = 1, and a∃ij = 0 if λij = −1;   a∀ij = 0 if λij = 1, and a∀ij = aij if λij = −1.

Similarly, let us decompose the vector b = (b1, . . . , bm)⊤ into the two vectors b∃ = (b∃1, . . . , b∃m)⊤ and b∀ = (b∀1, . . . , b∀m)⊤ such that

b∃i = bi if βi = 1, and b∃i = 0 if βi = −1;   b∀i = 0 if βi = 1, and b∀i = bi if βi = −1.


It is furthermore obvious that A = A∀ + A∃, b = b∀ + b∃.

Definition (S.P. Shary [1]). For a given quantifier matrix Λ and a quantifier vector β, the generalized AE-solution set of the type Λβ is

Ξ_{Λ,β}(A, b) = { x ∈ R^n | (∀A′ ∈ A∀)(∀b′ ∈ b∀)(∃A″ ∈ A∃)(∃b″ ∈ b∃)( (A′ + A″)x = b′ + b″ ) }.   (1)

The main purpose of our paper is to inquire into the algorithmic complexity of the problem that arises in connection with these sets:

Problem. Find out whether the set (1) is unbounded or not.

In the rest of the paper, for two m×n-matrices A = (aij) and B = (bij), by A ∘ B we will denote their Hadamard product A ∘ B = (aij bij). Using the well-known Oettli–Prager theorem, it is possible to obtain an Oettli–Prager-type description of the generalized solution sets.

For any given Λ and β, the equality

Ξ_{Λ,β}(A, b) = { x ∈ R^n | |Ac x − bc| ≤ (Λ ∘ ∆)|x| + β ∘ δ }

holds, where Ac = (1/2)(\underline{A} + \overline{A}), ∆ = (1/2)(\overline{A} − \underline{A}), bc = (1/2)(\underline{b} + \overline{b}), δ = (1/2)(\overline{b} − \underline{b}). Using this description, we obtain the following

Proposition. The set Ξ_{Λ,β}(A, b) is unbounded if and only if for some y ∈ Q = { x ∈ R^n | xi ∈ {−1, 1}, i = 1, . . . , n } there exists a solution to the following system of linear inequalities (where Ty = diag{y1, . . . , yn}):

−(Λ ∘ ∆)Ty x − β ∘ δ ≤ Ac x − bc ≤ (Λ ∘ ∆)Ty x + β ∘ δ,   Ty x ≥ 0,

−(Λ ∘ ∆)Ty z ≤ Ac z ≤ (Λ ∘ ∆)Ty z,   Ty z ≥ 0,   Σ_{i=1}^{n} yi zi ≥ 1.   (2)

Theorem. If the functions Λ, β are easily computable and 1-saturate (the definition can be found in [2]), then the Problem is NP-complete.

In particular, it follows from the theorem that if P ≠ NP then there does not exist a criterion of unboundedness of the AE-solution set which is better than checking solvability for 2^n systems of the form (2).
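The Oettli–Prager-type description above yields an immediate membership test: x ∈ Ξ_{Λ,β}(A, b) iff |Ac x − bc| ≤ (Λ ∘ ∆)|x| + β ∘ δ holds componentwise. A straightforward C sketch (our illustration, with the Hadamard products expanded entrywise):

```c
#include <assert.h>
#include <math.h>

/* Membership test for the generalized AE-solution set via the
   Oettli-Prager-type description: x belongs to Xi_{Lambda,beta}(A, b)
   iff |Ac*x - bc| <= (Lambda o Delta)|x| + beta o delta componentwise,
   where 'o' is the Hadamard (entrywise) product.  Matrices are row-major. */
static int in_ae_solution_set(int m, int n,
                              const double *Ac, const double *Delta,
                              const int *Lambda,      /* entries +1 or -1 */
                              const double *bc, const double *delta,
                              const int *beta,        /* entries +1 or -1 */
                              const double *x) {
    for (int i = 0; i < m; ++i) {
        double lhs = -bc[i];                  /* i-th component of Ac*x - bc */
        double rhs = beta[i] * delta[i];      /* i-th component of beta o delta */
        for (int j = 0; j < n; ++j) {
            lhs += Ac[i * n + j] * x[j];
            rhs += Lambda[i * n + j] * Delta[i * n + j] * fabs(x[j]);
        }
        if (fabs(lhs) > rhs) return 0;
    }
    return 1;
}
```

With Λ and β set identically to +1, this reduces to the classical Oettli–Prager criterion for the united solution set.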

References:

[1] S.P. Shary, A new technique in systems analysis under interval uncertainty and ambiguity, Reliable Computing, 8 (2002), No. 5, pp. 321–418.

[2] A.V. Lakeyev, Computational complexity of estimation of generalized solution sets for interval linear systems, Computational Technologies, 8 (2003), No. 1, pp. 12–23.


There’s no reliable computing without

reliable access to rounding modes

Christoph Lauter and Valerie Menissier-Morain

LIP6 – UPMC, 4 place Jussieu, 75252 Paris Cedex 5, France
[emailprotected], [emailprotected]

While approximate answers are accepted for pure Floating-Point Arithmetic (FPA), Interval Arithmetic (IA) is supposed to give reliable results. Indeed, IA never lies, as it provides lower and upper bounds that provably encompass the true result. Basic IA achieves this enclosure property by taking all Floating-Point (FP) roundings into account, rounding lower bounds down (∇) and upper bounds up (∆), or inflating the round-to-nearest result by a machine epsilon [6]. E.g. interval addition [a, b] + [c, d] is implemented as [∇(a + c), ∆(b + d)].

However, basic IA often cannot be used as such [4,5]. First, each basic IA operation uses both directed rounding modes (RM), hence requiring at least one RM change. As this is an expensive operation on most processors, requiring for instance a pipeline flush, it should be avoided as often as possible. Second, basic IA provides the elementary operations such as addition and multiplication only, whereas most modern scientific computing needs high-level operations such as matrix and vector addition and multiplication or linear system solving.

In the world of pure FPA, all these operations are available in fast and highly tuned math libraries. The Intel Math Kernel Library (MKL) [1] is one of the most advanced and widely used libraries for this purpose. In a decade of existence, with a whole team working on it, it has reached significant maturity.

MKL did have high-level IA, particularly linear solvers, between 2005 and 2008 [2]. Then this part suddenly disappeared. Nowadays MKL provides FPA only, and implementations for IA would have to go through the same decade of difficulties MKL has gone through to get from basic IA to high-level operations.

Recent papers and software tools such as Intlab therefore try to reuse the FPA in MKL for IA by applying high-level reasoning on the code [4,5]. For instance, for a matrix-matrix product, MKL with the RM set to round-down for all operations should enable us to compute a matrix that is a lower bound for the exact matrix product. By clever rewrites of IA formulas and a small number of RM changes before calls to MKL, interval enclosures for IA operations can hence be computed. Inflating the round-to-nearest result is not possible for matrices, as there is no “machine epsilon” for whole matrix operations.


Here is where the trouble arrives: the reliability of the IA results boils down to setting the RM for all subsequent FP operations reliably. Indeed, suppose we work in Matlab/Intlab (for other tools, like Maple or Mathematica, it is similar); we have a mix of C code, MKL and specific Matlab or Intlab code.

For C code, fesetround exists. Matlab uses it, too. However, e.g. printing instructions might affect the RM again. How a RM change is propagated from one thread or node of a cluster to all others is unspecified in the C standard.

In MKL the RM can be specified only in the VML (Vector Math Library) part, and any multi-threading and clustering behavior is not documented. Further, MKL executes for the same function different codes depending on the word length, the processor vendor or the possibility to use the x87 co-processor or the SSE2 instruction set. In the generic code the internal computations are essentially performed in extended precision and then converted back to double or single. There is no known guarantee that the result actually is a reliable bound.

Moreover, as mentioned in a March 2012 message on the reliable computing mailing list by Frederic Goualard, the RM can change independently of the one specified by the programmer, and obviously independently of the prerequisites of other libraries. The quality of the final result is seriously compromised.

With such a mess, how can IA be called reliable? We cannot know in each piece of code what the RM will be. So what do we know for a mix of codes?

We are thus calling for reliable support for setting the RM and clear documentation in all the tools mentioned, namely MKL, Matlab, Intlab, Maple, Gap. Otherwise publishing papers on reliable IA seems to be a waste of time.

References:

[1] Intel® Math Kernel Library 10.3 – Documentation, http://software.intel.com/sites/products/documentation/hpc/mkl/mklman/index.htm.

[2] Intel® Math Kernel Library – Reference Manual, http://www.nsc.ru/interval/MKLmanual.pdf, September 2007.

[3] MATLAB, version 7.14 (R2012a), The MathWorks Inc., Natick, Massachusetts, 2012, http://www.mathworks.fr/help/techdoc/.

[4] S.M. Rump, INTLAB – INTerval LABoratory, In: Developments in Reliable Computing (Tibor Csendes, ed.), Dordrecht, 1999, pp. 77–104.

[5] C.R. Milani, M. Kolberg, L.G. Fernandes, Solving dense interval linear systems with verified computing on multicore architectures, In: Proc. of the 9th Intern’l Conf. on HPC for Comput’l Science, 2011, pp. 435–448.

[6] W. Hofschuster, W. Kramer, FI LIB, eine schnelle und portable Funktionsbibliothek für reelle Argumente und reelle Intervalle im IEEE-double-Format, Tech. Rep. 98/7, Univ. Karlsruhe, 1998.


A framework of high precision

eigenvalue estimation for selfadjoint

elliptic differential operator

Xuefeng Liu and Shin’ichi Oishi

Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

[emailprotected]

Keywords: eigenvalue problem, finite element method, homotopy method, Lehmann–Goerisch theorem

Based on several fundamental preceding research results [1–4], this talk aims to propose a framework providing high precision bounds for the leading eigenvalues of a selfadjoint elliptic differential operator over a polygonal domain:

−div (a∇u) + cu = λu in Ω, u = 0 on ∂Ω (1)

where a ∈ C1(Ω) and c ∈ L∞(Ω). The proposed framework has the following features:

• the domain of the eigenvalue problem in consideration can be of free shape, because the finite element method, with its nice flexibility, is successfully adopted in bounding the eigenvalues [2];

• it can deal with a general selfadjoint elliptic operator, where the homotopy method [1] plays an important role;

• the obtained eigenvalue bounds have high precision, which is due to the Lehmann–Goerisch theorem [3,4] and well constructed approximating base functions.

The eigenvalue problem (1) is solved by considering the weak formulation:

Find u ∈ H^1_0(Ω), λ ∈ R, s.t. (a∇u, ∇v) + (cu, v) = λ(u, v), ∀v ∈ H^1_0(Ω),   (2)

where H^1_0(Ω) is a kind of Sobolev function space. Let us denote the eigenvalues by λ1 ≤ λ2 ≤ · · · . The high precision bounds for the leading eigenvalues λi are obtained in three steps.

Step 1: the base eigenvalue problem −∆u = λu is solved approximately in a certain finite element space, with approximate eigenvalues

λ1,h ≤ λ2,h ≤ · · · ≤ λn,h,

and an error estimation for the approximate eigenvalues is given as below [2]:

λi,h / (1 + Mh λi,h) ≤ λi ≤ λi,h   (i = 1, · · · , n),   (3)

where Mh is a computable quantity depending on the domain shape and the mesh size.

Step 2: the eigenvalue bounds for the general elliptic operator in consideration are obtained by applying the homotopy method [1], which estimates the eigenvalue variation in transforming the base problem −∆u = λu into the one wanted. If the domain is convex, this step can be simplified by extending the result of (3).

Step 3: the Lehmann–Goerisch theorem [3,4] is applied to sharpen the bounds, along with a proper selection of base functions to approximate the eigenfunction. To deal with domains of free shape, singular base functions corresponding to the singular part of the eigenfunction, as well as Bezier patches over a triangulation of the domain, are used.

In the talk, we will also illustrate several examples to demonstrate the efficiency of our proposed framework.

References:

[1] M. Plum, Bounds for eigenvalues of second-order elliptic differential operators, The Journal of Applied Mathematics and Physics (ZAMP), 42 (1991), No. 6, pp. 848–863.

[2] X. Liu, S. Oishi, Verified eigenvalue evaluation for Laplacian over polygonal domain of arbitrary shape, submitted to SIAM Journal on Numerical Analysis, 2012.

[3] N.J. Lehmann, Optimale Eigenwerteinschließungen, Numerische Mathematik, 5 (1963), No. 1, pp. 246–272.

[4] H. Behnke, F. Goerisch, Inclusions for eigenvalues of selfadjoint problems, In: Topics in Validated Computations (ed. J. Herzberger), North-Holland, Amsterdam, 1994, pp. 277–322.


Comparisons of implementations of

Rohn modification in PPS-methods

for interval linear systems

Dmitry Yu. Lyudvin, Sergey P. Shary

Institute of Computational Technologies SD RAS
6, Lavrentiev ave., 630090 Novosibirsk, Russia
[emailprotected], [emailprotected]

Keywords: interval analysis, interval linear algebraic system, method of partitioning of the parameter set, Rohn modification

We consider interval linear systems of the form Ax = b with interval matrices A ∈ IR^{n×n} and interval right-hand side vectors b ∈ IR^n. The interval system is understood as a family of point linear systems Ax = b with A ∈ A and b ∈ b. The solution set of the interval linear system is defined as the set Ξ(A, b) = { x ∈ R^n | (∃A ∈ A)(∃b ∈ b)(Ax = b) }, formed by the solutions to all the point systems Ax = b with A ∈ A and b ∈ b. We are interested in the optimal enclosure of the solution set of the interval linear system, i.e. the least inclusive interval vector that contains the solution set.

For the solution of the above problem, we use the parameter partitioning methods or, shortly, PPS-methods developed in [1,2]. The essence of PPS-methods is the sequential refinement of the estimates of the solution set through adaptive partitioning of the interval parameters of the system under solution.

The purpose of the present work was to compare various implementations of PPS-methods that use

1) Rohn’s technique for eliminating unpromising vertex combinations;

2) the estimate monotonicity test with respect to the components of the matrix and the right-hand side vector of the system;

3) various enclosure methods for interval linear systems;

4) various ways of processing the so-called working list, in which the results of the partition of the interval linear system are stored.

Special attention is paid to the modification based on Rohn’s technique, which is the most complex and laborious, but also the most efficient one for systems of moderate dimensions.


J. Rohn revealed that, if the matrix A is regular, then both the minimal and maximal component-wise values of the points from the solution set are attained on the set of no more than 2^n so-called extreme solutions to the equation |(mid A)x − mid b| = rad A · |x| + rad b [3]. Our INTLAB code linppse [4] implements a modification of the general PPS-methods based on this result [2]. We have carried out numerical tests and examined the efficiency of the algorithm depending on the properties of the interval matrix of the system.

Also, we have investigated various versions of PPS-methods, which used, as procedures for computing basic enclosures, the Krawczyk method, the modified Krawczyk method with epsilon-inflation, the interval Gauss method, the interval Gauss–Seidel iteration, the Hansen–Bliek–Rohn procedure, and the verifylss procedure from the INTLAB toolbox. Experimental results demonstrated that, amongst the above listed techniques, the Hansen–Bliek–Rohn procedure with preliminary preconditioning provides the best enclosures for PPS-methods.

Based on numerical experiments, we elaborate practical recommendations on how to optimize, within the PPS-methods, the processing of the working list (of “systems-descendants”). Finally, we present the results of comparisons between two computer codes for computing optimal enclosures of the solution set of interval linear systems, namely, our linppse [4] and verintervalhull from Rohn’s VERSOFT package [5].

References:

[1] S.P. Shary, A new class of algorithms for optimal solution of intervallinear systems, Interval Computations, 4 (1992), No. 2, pp. 18–29.

[2] S.P. Shary, Parameter partition methods for optimal numerical solution of interval linear systems, In: Computational Science and High-Performance Computing III. The 3rd Russian-German Advanced Research Workshop, Novosibirsk, Russia, 23–27 July 2007 (E. Krause, Yu.I. Shokin, M. Resch, N.Yu. Shokina, eds.), Springer, Berlin-Heidelberg, 2008, pp. 184–205.

[3] M. Fiedler, J. Nedoma, J. Ramik, J. Rohn, K. Zimmermann, Linear optimization problems with inexact data, Springer, New York, 2006.

[4] The program for the optimal (exact) componentwise estimation of the united solution set to interval linear system of equations, http://www.nsc.ru/interval/Programing/MCodes/linppse.m

[5] Verification software in MATLAB/INTLAB, http://www.cs.cas.cz/rohn/matlab


Componentwise inclusion for solutions

in least squares problems and

underdetermined systems

Shinya Miyajima

Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-shi, Gifu 501-1193, Japan

[emailprotected]

Keywords: least squares problems, underdetermined systems, minimal 2-norm solution, numerical enclosure, verified error bound

In this talk, we are concerned with the accuracy of numerically computed results for solutions in least squares problems

    min_{x ∈ R^n} ‖b − Ax‖_2,   A ∈ R^{m×n},  b ∈ R^m,        (1)

and minimal 2-norm solutions in underdetermined systems

    Ax = b,   A ∈ R^{n×m},  b ∈ R^n,        (2)

where m ≥ n and A has full rank. The problems (1) and (2) arise in many applications of scientific computing, e.g. linear and nonlinear programming [1], statistical analysis, signal processing, computer vision [2] and so forth. It is well known (e.g. [3,4]) that the solutions in (1) and (2) can be written as A^+ b, where A^+ denotes the pseudo-inverse of A.

We consider in this talk numerically enclosing A^+ b, specifically, computing error bounds for x using floating-point operations, where x denotes a numerical result for A^+ b. It is well known (e.g. [4]) that A^+ b in (1) and (2) can be computed by solving the augmented linear systems

    [ A    −I_m ] [ x ]   [ b ]           [ A^T  −I_m ] [ w ]   [ 0 ]
    [ O_n  A^T  ] [ w ] = [ 0 ]    and    [ O_n   A   ] [ x ] = [ b ],        (3)

respectively, where I_m and O_n denote the m×m identity matrix and the n×n zero matrix, respectively, since these systems imply x = A^+ b. The INTLAB [5] function verifylss encloses A^+ b in (1) and (2) by enclosing solutions in (3), supplies componentwise error bounds, and requires O((m+n)^3) operations.
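As a plain (non-verified) illustration of the first augmented system, the sketch below checks numerically that solving it recovers the least squares solution A^+ b; the random test data is ours.

```python
# Illustration: the first augmented system in (3) yields x = A^+ b.
# Plain numpy; no verification step is performed here.
import numpy as np

m, n = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# K = [ A   -I_m ]      rhs = [ b ]
#     [ O_n  A^T ]            [ 0 ]
K = np.block([[A, -np.eye(m)],
              [np.zeros((n, n)), A.T]])
rhs = np.concatenate([b, np.zeros(n)])
xw = np.linalg.solve(K, rhs)
x = xw[:n]                               # first n components give A^+ b

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, x_ls)
```

The two block equations say w = Ax − b and A^T w = 0, which is exactly the normal equations A^T(Ax − b) = 0.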


The VERSOFT [6] routine verlsq returns the enclosure of A^+ b in (1) and (2) by computing an interval matrix including A^+ and gives componentwise error bounds. The author [7] has proposed an algorithm for enclosing A^+ b in (2), which gives normwise error bounds. In this algorithm, (3) is not utilized, i.e. (2) is directly considered, so that the computational cost of this algorithm is not O((m + n)^3) but O(m^2 n) operations. Recently Rump [8] proposed fast algorithms for enclosing A^+ b in (1) and (2), which return normwise error bounds.

The purpose of this talk is to propose algorithms for enclosing A^+ b in (1) and (2) which supply componentwise error bounds and are as fast as the algorithms in [8]. These algorithms do not assume but prove A to have full rank. We prove that the error bounds obtained by the proposed algorithms are equal to or smaller than those by the algorithms in [8], and finally compare the proposed algorithms with verifylss, verlsq and the algorithms in [7,8] through some numerical results.

References:

[1] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.

[2] B. Triggs, P. McLauchlan, R. Hartley, A. Fitzgibbon, Bundle adjustment – a modern synthesis, Lect. Notes Comput. Sc., 1883 (2000), pp. 153–177.

[3] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, Baltimore and London, 1996.

[4] N.J. Higham, Accuracy and Stability of Numerical Algorithms, second ed., SIAM Publications, Philadelphia, 2002.

[5] S.M. Rump, INTLAB – INTerval LABoratory, in Developments in Reliable Computing (T. Csendes, ed.), Kluwer Academic Publishers, Dordrecht, 1999, pp. 77–104.

[6] J. Rohn, VERSOFT: Verification software in MATLAB / INTLAB, http://uivtx.cs.cas.cz/~rohn/matlab/.

[7] S. Miyajima, Fast enclosure for solutions in underdetermined systems, J. Comput. Appl. Math., 234 (2010), pp. 3436–3444.

[8] S.M. Rump, Verified bounds for least squares problems and underdetermined linear systems, SIAM J. Matrix Anal. Appl., 33 (2012), pp. 130–148.


Verified computations for all generalized singular values

Shinya Miyajima

Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-shi, Gifu 501-1193, Japan

[emailprotected]

Keywords: singular values, generalized singular values, verified bounds

A matrix factorization having great importance in numerical linear algebra is the singular value decomposition (SVD) (e.g. [1]), which is based on the following theorem:

Theorem 1. Let A ∈ R^{m×n} be given and q := min(m, n). There exist orthogonal U ∈ R^{m×m} and V ∈ R^{n×n} such that

    U^T A V = Σ = diag(σ_1, . . . , σ_q),

    σ_1 ≥ · · · ≥ σ_{r∗} > σ_{r∗+1} = · · · = σ_q = 0,   r∗ = rank(A).

The nonnegative real numbers σ_i, i = 1, . . . , q, are called the singular values of A, which play important roles in application areas. It is well known that σ_i^2 are the eigenvalues of the symmetric pencils A^T A − λI_n and A A^T − λI_m, where I_n denotes the n×n identity matrix.
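The relation σ_i^2 = λ_i(A^T A) is easy to check numerically; the following is a plain numpy illustration (not a verified computation), with sample data of our choosing.

```python
# Numerical check of sigma_i^2 = lambda_i(A^T A) for a small matrix.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
sigma = np.linalg.svd(A, compute_uv=False)          # singular values, descending
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]    # eigenvalues of A^T A, descending
print(sigma**2, lam)
```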

Van Loan [2] generalized the SVD. This generalization is called the generalized singular value decomposition (GSVD) and is based on the following theorem:

Theorem 2. Let A ∈ R^{m×n} with m ≥ n and B ∈ R^{p×n} be given and q := min(p, n). There exist orthogonal U ∈ R^{m×m} and V ∈ R^{p×p} and a nonsingular X ∈ R^{n×n} such that

    U^T A X = Σ_A = diag(c_1, . . . , c_n),   V^T B X = Σ_B = diag(s_1, . . . , s_q),

    0 ≤ c_1 ≤ · · · ≤ c_n ≤ 1,   1 ≥ s_1 ≥ · · · ≥ s_{r∗} > s_{r∗+1} = · · · = s_q = 0,

    r∗ = rank(B),   c_i^2 + s_i^2 = 1,   i = 1, . . . , q.

The quotients µ_j = c_j / s_j, j = 1, . . . , r∗, are called the generalized singular values of A and B. Note that µ_j^2 are the eigenvalues of the symmetric pencil A^T A − λ B^T B. Although a more general definition of the GSVD can be found in


[3], in this talk we define the GSVD by Theorem 2 for simplicity. The GSVD is a tool used in many applications, such as damped least squares, least squares with equality constraints, certain generalized eigenvalue problems and weighted least squares [2].

In this talk, we consider computing verified bounds for all the singular values and generalized singular values. For the singular values, Oishi [4] first proposed such an algorithm utilizing a numerical full SVD. Recently Rump [5] proposed a fast algorithm which utilizes not the full SVD but the eigen-decomposition. The VERSOFT [6] routine versingval encloses all the singular values utilizing an augmented matrix. For the generalized singular values, an algorithm for computing verified bounds of c_j, s_j and the j-th columns of U, V and X for specified j ∈ {1, . . . , r∗} has been proposed in [7]. As far as the author knows, on the other hand, an algorithm giving verified bounds of µ_j for all j = 1, . . . , r∗ has not been known.

The purpose of this talk is to propose algorithms for computing verified bounds of all the singular values or generalized singular values. For the singular values, we propose an algorithm which is faster than the algorithms in [4,5] and versingval. We extend this algorithm to the generalized singular values and propose two algorithms. The first and second algorithms are applicable if B^T B and A^T A are nonsingular, respectively. We do not assume but prove these nonsingularities during the execution of these algorithms. Numerical results show the properties of the proposed algorithms.

References:

[1] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., The Johns Hopkins University Press, Baltimore and London, 1996.

[2] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), No. 1, pp. 76–83.

[3] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18 (1981), No. 3, pp. 398–405.

[4] S. Oishi, Fast enclosure of matrix eigenvalues and singular values via rounding mode controlled computation, Linear Algebra Appl., 324 (2001), pp. 133–146.

[5] S.M. Rump, Verified bounds for singular values, in particular for the spectral norm of a matrix and its inverse, BIT Numer. Math., 51 (2011), No. 2, pp. 367–384.

[6] J. Rohn, VERSOFT: Verification software in MATLAB / INTLAB, http://uivtx.cs.cas.cz/~rohn/matlab/.

[7] G. Alefeld, R. Hoffmann, G. Mayer, Verification algorithms for generalized singular values, Math. Nachr., 208 (1999), pp. 5–29.


Information support of scientific symposia

Yurii I. Molorodov

Institute of Computational Technologies SB RAS,
6, Lavrentiev ave., 630090 Novosibirsk, Russia

[emailprotected]

Keywords: information systems, interval arithmetic, engineering

In information support of science, an important point is the organization of regular meetings and discussions of researchers working in specific fields. In particular, this is critical for scientific computing, computer arithmetic, and verified numerical methods, where one should have access to the achievements, see the trends and predict the prospects of this area of knowledge. That is the purpose of the current 15th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Verified Numerical Computations, which will be held in Novosibirsk on September 23–29, 2012.

Events such as SCAN'2012 are preceded by a large amount of preparatory work performed by the organizers themselves and many other people [1, 2]. The first step is to initiate the conference, formulate its goals, objectives and scope, determine its time and venue, and form an organization team. It is the competence of the program committee to identify the "content" of the conference, the specificity of the submissions to be presented and discussed. These committees are responsible for the overall success of the conference.

The purpose of the second stage is the notification of all potentially interested individuals and organizations about the forthcoming conference, its scope, venue, format and dates, and conditions of participation. To do this, the organizers use a wide range of various means: putting the information onto electronic bulletin boards devoted to the relevant topics, direct mailing, printing and distributing leaflets, etc. At this stage, the availability of information plays a crucial role, so that it becomes necessary to maintain a web-site of the conference that publishes and promptly updates all the information, including news and ads (in our case, this is http://conf.nsc.ru/scan2012).

At the next stage, the organizers analyze and process the input information. In its flow, two major components can be identified: applications from potential participants, and abstracts of the submissions. A preliminary qualitative and


quantitative assessment is made in order to determine a general outline of the forthcoming meeting. The place of each submitted abstract within the program of the meeting is determined.

At the beginning of the fourth stage, after the pre-appointed time elapses, the organizers stop receiving abstracts and turn to their analysis. Peer-reviewing of the submissions is usually performed by a Program Committee, consisting of experts in the field, who evaluate the submissions according to several criteria: originality of the results, quality of presentation, relevance of the work, and others. Often, within the overall scope of the conference, there exist several different branches, and the corresponding submissions are to be presented and discussed separately. It is the task of the program committee to coordinate such branches and conduct the overall scientific policy. As a result of this stage, a pool of accepted submissions is formed that can be a basis for compiling a preliminary working program of the meeting. At the end of the fourth stage, a preliminary program of the meeting is prepared as well as the overall activity plan, which are published on the conference website. Also, the organizing committee makes and prints the volume of abstracts to be distributed among the conference participants during the on-desk registration.

The main part of the conference begins with registration of the arrived participants. The organizers should know in advance which class of hotels the participants intend to stay in, and provide the corresponding opportunities. At this stage, a large number of various problems may occur. Many of them should be predicted and prevented at the preparatory stages, although they cannot be totally eliminated.

After completion of the main stage, the final part of the scientific meeting comes, when the organizers should summarize the overall results of the conference and make them publicly available for future use. This traditionally amounts to publication of the conference proceedings, either in paper or electronic form. For SCAN'2012, the proceedings will be published in the open electronic journal Reliable Computing [3].

References:

[1] A.M. Fedotov, A.E. Guskov, Yu.I. Molorodov, Conference information system, http://www.sbras.ru/ws/

[2] A.E. Guskov, Semantic Web: Theory and Practice, LAP – LAMBERT Academic Publishing, 2005.

[3] Reliable Computing (an open electronic journal), http://interval.louisiana.edu/reliable-computing-journal


Towards an efficient implementation of CADNA in the BLAS: example of the routine DgemmCADNA

Sethy Montan1,2, Jean-Marie Chesneaux2, Christophe Denis1, Jean-Luc Lamotte2

1EDF R&D – Departement SINETICS, 1, Avenue du general de Gaulle, 92141 Clamart Cedex, France

2Laboratoire d'Informatique de Paris 6 – Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris Cedex 05, France

[emailprotected]

Keywords: CADNA, discrete stochastic arithmetic, CESTAC, BLAS

Several approximations occur during a numerical simulation: physical phenomena are modelled using mathematical equations, continuous functions are replaced by discretized ones, and real numbers are replaced by finite-precision representations (floating-point numbers). The use of IEEE-754 arithmetic generates round-off errors at each elementary arithmetic operation. By accumulation, these errors can affect the accuracy of computed results, possibly leading to partial or total inaccuracy. The effect of these rounding errors can be analyzed and studied by methods like forward/backward analysis, interval arithmetic or stochastic arithmetic (which is implemented in the CADNA validation tool).

A numerical verification of industrial codes, such as those developed at EDF R&D – the French provider of electricity – is required to estimate the precision and the quality of computed results, even more so for codes running in HPC environments where millions of instructions are performed each second. These programs usually use external libraries (MPI, BLACS, BLAS, LAPACK) [1]. In this context, it is required to have a tool as nonintrusive as possible to avoid rewriting the original code. In this regard, the CADNA library appears to be one of the most promising approaches for industrial applications.

The CADNA library, developed by the Laboratoire d'Informatique de Paris 6, enables us to estimate round-off error propagation using a probabilistic approach in any simulation program (written in C/C++ or Fortran) and to control its numerical quality by detecting numerical instabilities that may occur at run time [2]. CADNA implements Discrete Stochastic Arithmetic, which is


based on a probabilistic model of round-off errors (this arithmetic is defined with the CESTAC method). CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. However, a problem remains: stochastic types are not compatible with the aforementioned libraries. It is, therefore, necessary to develop extensions for these external libraries.

We are interested in an efficient implementation of the BLAS routine xGEMM compatible with CADNA. We have called this new routine DgemmCADNA. The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations, and xGEMM is the routine whose goal is to perform matrix multiplication [5]. The implementation of a basic algorithm for matrix product compatible with stochastic types leads to an overhead greater than 1000 for a 1024×1024 matrix compared to the standard and commercial versions of xGEMM. This overhead is due to the use of stochastic types, the rounding mode which changes randomly at each elementary operation (×, /, +, −), and a non-optimized use of the memory (cache and TLB misses).
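To convey the flavor of the CESTAC idea (not the actual CADNA implementation), the toy sketch below perturbs each arithmetic result by one ulp in a random direction, runs the computation three times, and estimates the number of common significant digits from the spread of the samples. The summation example and all names are ours.

```python
# Toy illustration of random rounding a la CESTAC: N = 3 perturbed runs,
# accuracy estimated from the spread of the samples.
import math, random

def rnd(x):
    """Random rounding: move x one ulp up or down with probability 1/2."""
    direction = math.inf if random.random() < 0.5 else -math.inf
    return math.nextafter(x, direction)

def perturbed_sum(n):
    """Sum of 1/i for i = 1..n with randomly rounded additions."""
    s = 0.0
    for i in range(1, n + 1):
        s = rnd(s + 1.0 / i)
    return s

random.seed(1)
samples = [perturbed_sum(100000) for _ in range(3)]
mean = sum(samples) / 3
spread = max(samples) - min(samples)
# crude estimate of the number of exact significant decimal digits
digits = math.log10(abs(mean) / spread) if spread else 15
print(mean, digits)
```

In CADNA proper the estimate uses Student's distribution over the N samples and every operation of the stochastic types is perturbed; this sketch only shows why agreement of the samples measures accuracy.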

We will present different solutions to reduce this overhead and the results we have obtained. In order to improve the hierarchical memory usage, special data structures (Block Data Layout) are used. This allows us to improve the memory performance and to reduce cache and TLB misses. A new implementation of the CESTAC method has been introduced to reduce the overhead due to the random rounding mode. Finally, we have obtained an overhead of about 25 compared to GotoBLAS in sequential mode.

We will also present, briefly, new extensions for CADNA: CADNA MPI and CADNA BLACS, which allow the use of stochastic data in programs based on the standard communication routines (MPI or BLACS).

References:

[1] Ch. Denis and S. Montan, Numerical verification of industrial numerical codes, ESAIM Proceedings, 35 (March 2012), pp. 107–113.

[2] F. Jezequel, J.-M. Chesneaux, and J.-L. Lamotte, A new version of the CADNA library for estimating round-off error propagation in Fortran programs, Computer Physics Communications, 181, No. 11 (2010), pp. 1927–1928.

[3] K. Goto and R.A. Van De Geijn, High-performance implementation of the level-3 BLAS, ACM Transactions on Mathematical Software (TOMS), 35, No. 1 (2008), (14 pages).


[4] N.J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.

[5] Basic Linear Algebra Technical Forum Standard, August 2001.

Verification methods for linear systems on a GPU

Yusuke Morikura1, Katsuhisa Ozaki2 and Shin’ichi Oishi3

1Graduate School of Fundamental Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

2Department of Mathematical Sciences, Shibaura Institute of Technology, 307 Fukasaku, Minuma-ku, Saitama-shi, Saitama 337-8570, Japan

3Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

[emailprotected]

Keywords: linear systems, verified numerical computations, a priori error estimate, GPGPU

This talk discusses a verification method for linear systems:

Ax = b

where A is a real n×n dense matrix and b is a real n-vector. Here, a verification method means a method which outputs an error bound between a numerical solution and an exact solution by floating-point computations. The aim of this talk is to propose a verification method for linear systems suited to a GPU.

The GPU is used not only for accelerating image rendering but also for numerical computations, since its computational performance is very high. Recently, useful toolboxes and libraries for GPGPU (General-Purpose Computation on Graphics Processing Units) have been developed, for example, the MATLAB Parallel Computing Toolbox, JACKET, MAGMA and so forth.

Several verification methods for linear systems have been developed for a dense coefficient matrix [1, 2, 3]. Many verification methods require switches


of rounding modes defined by the IEEE 754 standard [4]. However, since the GPU (Graphics Processing Unit) does not have a dynamically configurable rounding mode [5], the methods of [1, 3] cannot be implemented on the GPU straightforwardly. To overcome this problem, we first improved the Ogita-Rump-Oishi error estimation [2] by using new floating-point error estimations by Rump [6]. Our algorithm does not switch rounding modes, namely, it works only in the default rounding mode on the GPGPU (rounding to nearest).
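The kind of a priori bound that is usable in round-to-nearest only can be illustrated by the classical dot-product estimate |fl(x^T y) − x^T y| ≤ γ_n |x|^T |y| with γ_n = nu/(1 − nu), u = 2^-53 (this is the textbook estimate, shown here for flavor, not the talk's improved bounds). The check uses exact rational arithmetic as a reference; the data is ours.

```python
# Classical a priori dot-product error bound, checked against an exact
# rational reference.  No rounding-mode switches are needed.
from fractions import Fraction

x = [1.0 / 3.0, 1.0 / 7.0, 1.0 / 11.0, 1.0 / 13.0]
y = [1.0 / 5.0, 1.0 / 9.0, 1.0 / 17.0, 1.0 / 19.0]
n = len(x)

fl_dot = 0.0
for xi, yi in zip(x, y):
    fl_dot += xi * yi                      # rounded products and sums

exact = sum(Fraction(xi) * Fraction(yi) for xi, yi in zip(x, y))
err = abs(Fraction(fl_dot) - exact)        # true rounding error

u = 2.0 ** -53
gamma_n = n * u / (1 - n * u)
bound = gamma_n * sum(abs(xi) * abs(yi) for xi, yi in zip(x, y))
print(float(err), bound)
```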

Next, the amount of device (GPU) memory is small compared to that of host (CPU) memory in many cases. For example, the Tesla C2070 by NVIDIA Corporation has 6 GBytes of memory, although a CPU has much more working memory (recently, the amount of memory installed for a CPU can be over 48 GBytes). Therefore, we apply blockwise computations to reduce the amount of working memory on the GPU. Data transfer of block matrices from CPU to GPU and from GPU to CPU is required for blockwise computations, and its transfer speed is slow due to low bandwidth. However, numerical results illustrate that computational times using blockwise computation are only 10.7 percent slower than those without blockwise computation. Therefore, blockwise computation does not significantly slow down the computational performance. The error bound by our algorithm is two or three times better than that by [2].
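The blockwise scheme can be sketched as follows: the product is formed panel by panel so that only a small working set must reside in device memory at any time. Plain numpy stands in for the host/device transfers; block size and data are illustrative.

```python
# Sketch of blockwise matrix multiplication: at each step only three
# blk x blk panels need to fit in (GPU) working memory.
import numpy as np

def blocked_matmul(A, B, blk):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, blk):
        for j in range(0, m, blk):
            for p in range(0, k, blk):
                # only these three blocks would be resident on the device
                C[i:i+blk, j:j+blk] += A[i:i+blk, p:p+blk] @ B[p:p+blk, j:j+blk]
    return C

rng = np.random.default_rng(42)
A = rng.standard_normal((6, 5)); B = rng.standard_normal((5, 7))
C = blocked_matmul(A, B, blk=2)
print(np.max(np.abs(C - A @ B)))
```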

References:

[1] S. Oishi, S.M. Rump, Fast verification of solutions of matrix equations, Numer. Math., 90 (2002), No. 4, pp. 755–773.

[2] T. Ogita, S.M. Rump, S. Oishi, Verified solutions of linear systems without directed rounding, Technical Report 2005-04, Advanced Research Institute for Science and Engineering, Waseda University, Tokyo, Japan, 2005.

[3] T. Ogita, S. Oishi, Fast verification method for large-scale linear systems, IPSJ Transactions, 46, No. SIG10(TOM12) (2005), pp. 10–18.

[4] ANSI/IEEE Std 754-1985: IEEE Standard for Binary Floating-Point Arithmetic, New York, 1985.

[5] NVIDIA, CUDA C Programming Guide, http://developer.nvidia.com/nvidia-gpu-computing-documentation

[6] S.M. Rump, Error estimation of floating-point summation and dot product, BIT Numerical Mathematics, 52 (2012), No. 1, pp. 201–220.


Approach based on instruction selection for fast and certified code generation

Christophe Mouilleron1,2,3, Amine Najahi1,2,3, Guillaume Revy1,2,3

1Univ. Perpignan Via Domitia, DALI, F-66860, Perpignan, France
2Univ. Montpellier II, LIRMM, UMR 5506, F-34095, Montpellier, France
3CNRS, LIRMM, UMR 5506, F-34095, Montpellier, France

amine.najahi,christophe.mouilleron,[emailprotected]

Keywords: fixed-point arithmetic, automatic code generation, instruction selection, numerical certification

Floating-point arithmetic [1] has become ubiquitous in the specification and implementation of programs, including those targeted at embedded systems. However, for the sake of chip area or power consumption constraints, some of these embedded systems are still shipped with no floating-point unit. In this case, only integer arithmetic is available at the hardware level. Hence, to run floating-point programs, we need either to use a library that emulates floating-point arithmetic in software (such as the FLIP∗ library), or to rewrite the programs to rely on fixed-point arithmetic [2]. Both approaches require the design of fixed-point routines, which appears to be a tedious and error-prone task, especially since it is partly done by hand. Thus, one of the current challenges is to design automatic tools to generate fixed-point programs as fast as possible while satisfying some accuracy constraints. In this sense, we have developed the CGPE† software tool, dedicated to the generation of fast and certified codes for evaluating bivariate polynomials in fixed-point arithmetic. This tool, based on the generation of several fast evaluation codes combined with a systematic numerical verification step, is well suited to VLIW integer processors using only binary adders and multipliers. We propose here an extension of CGPE, which consists in adding a step based on instruction selection [4, §8.9] to improve the speed and the accuracy of the generated codes for more advanced architectures.
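To make the fixed-point setting concrete, here is a toy Q-format evaluation of a (univariate) polynomial by Horner's rule: values are scaled integers with F fractional bits, and each product needs a compensating right shift. This is illustrative only; CGPE's actual code generation and certification are far more involved, and all names here are ours.

```python
# Toy Q-format fixed-point Horner evaluation: scaled-integer arithmetic
# with a truncating right shift after each product.
F = 30                                   # fractional bits

def to_fx(x):   return int(round(x * (1 << F)))
def from_fx(v): return v / (1 << F)
def mul_fx(a, b): return (a * b) >> F    # truncating fixed-point product

def horner_fx(coeffs_fx, x_fx):
    """Evaluate a polynomial (highest-degree coefficient first) at x."""
    acc = coeffs_fx[0]
    for c in coeffs_fx[1:]:
        acc = mul_fx(acc, x_fx) + c
    return acc

# p(x) = 1 + x/2 + x^2/6 evaluated at x = 0.25
coeffs = [1.0 / 6.0, 0.5, 1.0]
x = 0.25
val = from_fx(horner_fx([to_fx(c) for c in coeffs], to_fx(x)))
ref = 1.0 + x / 2 + x * x / 6
print(val, ref)
```

Each truncating shift loses at most one unit in the last place, which is exactly the kind of error a certification step must accumulate and bound.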

Given an instruction set architecture, instruction selection is the compilation process that aims at finding a sequence of instructions implementing "at best" a given program. It works on a target-independent intermediate representation of this program, represented as a tree or a directed acyclic graph (DAG), and

∗Floating-point Library for Integer Processors (see http://flip.gforge.inria.fr).
†Code Generation for Polynomial Evaluation (see http://cgpe.gforge.inria.fr and [3]).


is usually used to optimize the code size or latency on the target architecture, while no guarantee is provided concerning the accuracy of the generated code. The general problem of instruction selection has been well studied and, even though it has been proven to be NP-complete even for simple machines in the case of DAGs [5], several algorithms exist to tackle this problem (see [5] and the references therein).

In the context of CGPE, where we represent polynomial evaluation expressions with DAGs, we can benefit from this work on instruction selection by combining it with the numerical verification step already implemented. The interest of our new approach is twofold. First, it is much more flexible than writing a generation algorithm for each available processor. Indeed, it mainly needs to work on the DAG representation of the expression to be implemented, which is independent of the target architecture, and thus it makes it easier to handle various architectures shipping different kinds of instructions. Second, it allows us to automatically generate codes optimized for a given target and satisfying various criteria like accuracy and performance, as well as code size or number of operators. This approach has been validated on the evaluation of polynomials, where it allows us to write efficient codes making the best use of some advanced architecture features such as the presence of a fused multiply-add operator.
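A minimal sketch of the pattern-matching idea behind instruction selection: greedily tile (a*b)+c subexpressions of an expression tree with a fused multiply-add. Real selectors work on DAGs with cost models; the tree walk, instruction names and register naming below are all our own simplifications.

```python
# Greedy instruction selection on an expression tree: (a*b)+c -> fma.
def select(node):
    """node is a leaf name (str) or ('+'|'*', left, right).
    Returns (instruction list, result register)."""
    instrs = []
    tmp = iter(range(1000))
    def emit(n):
        if not isinstance(n, tuple):
            r = f"t{next(tmp)}"
            instrs.append(f"{r} = load {n}")
            return r
        op, l, rgt = n
        if op == '+' and isinstance(l, tuple) and l[0] == '*':
            ra, rb, rc = emit(l[1]), emit(l[2]), emit(rgt)
            r = f"t{next(tmp)}"
            instrs.append(f"{r} = fma {ra}, {rb}, {rc}")   # one tile, not mul+add
            return r
        rl, rr = emit(l), emit(rgt)
        r = f"t{next(tmp)}"
        instrs.append(f"{r} = {'add' if op == '+' else 'mul'} {rl}, {rr}")
        return r
    root = emit(node)
    return instrs, root

# (a*x)+b tiles to a single fma instead of a mul followed by an add
instrs, root = select(('+', ('*', 'a', 'x'), 'b'))
print(instrs)
```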

References:

[1] IEEE Standard 754-2008 – IEEE Standard for Floating-Point Arithmetic, 2008.

[2] D. Menard, D. Chillet, and O. Sentieys, Floating-to-fixed-point conversion for digital signal processors, EURASIP Journal on Applied Signal Processing, 2006, pp. 1–15.

[3] C. Mouilleron and G. Revy, Automatic generation of fast and certified code for polynomial evaluation, in Proc. of the 20th IEEE Symposium on Computer Arithmetic (E. Antelo, D. Hough, and P. Ienne, editors), IEEE, 2011, pp. 233–242.

[4] A.V. Aho, R. Sethi, and J.D. Ullman, Compilers: Principles, Techniques, and Tools, Addison-Wesley, Boston, 1986.

[5] D.R. Koes and S.C. Goldstein, Near-optimal instruction selection on DAGs, in Proc. of the 6th IEEE/ACM International Symposium on Code Generation and Optimization, ACM, New York, 2008, pp. 45–54.


JInterval library: principles, development, and perspectives

Dmitry Nadezhin1 and Sergei Zhilin2

1Oracle Labs, Zelenograd, Russia
2Altai State University, 61, Lenin ave., 656049, Barnaul, Russia

[emailprotected], [emailprotected]

Keywords: interval computations, Java, library

JInterval [1] was started in 2008 as a research project to develop a Java library for interval computations. The library is intended mainly for developers who create Java-based applied software. The design of the JInterval library was guided by the following requirements, ordered by descending priority:

1. The library must be clear and easy to use. No matter how wonderful a software tool is, it will hardly be accepted by developers if it is not transparent and easy to use.

2. The library should provide flexibility in the choice of interval arithmetic for computations. The user must be able to choose an interval arithmetic (classical, Kaucher, complex rectangular, complex circular, etc.) and to switch from one arithmetic to another if they are compatible. Syntactic differences between the use of this or that arithmetic should be minimized.

3. The library should provide flexibility in extending its functionality. The library must be layered functionally. Four layers should be defined: interval arithmetic operators, elementary interval functions, interval vector and matrix operations, and, finally, high-level interval methods, such as solvers of equations, optimization procedures, etc. The architecture of the library must allow for extensions at every layer, starting from the bottom one.

4. The library should provide flexibility in choosing the precision of interval endpoints and the associated rounding policies. The choice of interval endpoint representation and rounding mode should allow the user to tune the accuracy and speed of computation depending on the problem being solved.

5. The library must be portable. Cross-platform portability of the library is one of its major strengths, being a key distinction over its closest competitors. To a large extent, this requirement is ensured by the choice of the Java technology built on the principle "write once, run anywhere". However, the design must adhere to certain restrictions for practical implementation of this requirement.


6. The library should provide high performance. In the era of multicore and multiprocessor systems, a prerequisite for high performance is the ability to use the library safely in a multithreaded environment.

Achieving the required flexibility leads to widening the scope of the library, which risks a vast and obscure design, contrary to the simplicity requirement. To avoid this contradiction, and to preserve clarity of the library, the overall architecture needs to be transparent and consistent. This is achieved through appropriate design decisions. Methods for interval classes are unified, regardless of the interval arithmetic and of the internal representation of intervals. Intervals are treated as immutable objects. The user is provided with a simple interface to manage the rounding policy and the interval endpoint representation.
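The immutable-interval design can be sketched as follows (in Python for uniformity with the other examples here; it is not JInterval's Java API). Endpoints are rounded outward with nextafter so that each operation preserves enclosure, and mutation attempts are rejected.

```python
# Sketch of an immutable interval type with outward rounding.
import math

class Interval:
    __slots__ = ("inf", "sup")
    def __init__(self, inf, sup):
        object.__setattr__(self, "inf", inf)
        object.__setattr__(self, "sup", sup)
    def __setattr__(self, *_):               # enforce immutability
        raise AttributeError("Interval is immutable")
    def __add__(self, other):
        # round the lower bound down and the upper bound up by one ulp
        return Interval(math.nextafter(self.inf + other.inf, -math.inf),
                        math.nextafter(self.sup + other.sup,  math.inf))
    def __repr__(self):
        return f"[{self.inf}, {self.sup}]"

a = Interval(1.0, 2.0)
b = Interval(0.1, 0.2)
c = a + b                                    # encloses the exact sum [1.1, 2.2]
print(c)
```

Immutability makes such objects safe to share between threads without locking, which serves the high-performance requirement above.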

At the moment, JInterval provides the user with several interval arithmetics (classical real, extended Kaucher, complex rectangular, complex circular, complex sector, complex ring), interval elementary functions, interval vector and matrix operations, as well as a few methods for inner and outer estimation of the solution sets to interval linear systems.

A number of applications have been built using the JInterval library. A collection of plugins has been developed for the data mining platform KNIME. The collection includes an interval regression builder, an outlier detector, an ILS solver, etc. Another example is mobile applications, where JInterval is used for position uncertainty modeling in hybrid navigation.

The experience of JInterval implementation and usage taught us several lessons, and further development of JInterval will be governed by the following principles:

1. The Java language has a lot of advantages, but its syntax is not expressive enough for computational programming. The Scala language (fully compliant with the JVM) is considered as the basic language for a new JInterval implementation.

2. Presently, JInterval is not compliant with the draft interval arithmetic standard IEEE P1788. A new implementation will be adjusted to P1788.

3. To achieve high performance, JInterval will be equipped (using the Java Native Interface) with optional plugins for machine-dependent implementations of high precision arithmetic and interval linear algebra algorithms.

4. For applied software developers, a rich content of the fourth layer of the library (high-level interval analysis methods) is one of the most valuable features. Therefore, the replenishment of JInterval with solvers of algebraic and differential equations, interval optimizers, etc., remains the foremost task.

References:

[1] Java Library for Interval Computations, http://jinterval.kenai.com.


Verified integration of ODEs with Taylor models

Markus Neher

Karlsruhe Institute of Technology
Institute for Applied and Numerical Mathematics
Kaiserstr. 89-93, 76049 Karlsruhe, Germany

[emailprotected]

Keywords: ODEs, initial value problems, Taylor models

Verified integration methods for ODEs are methods that compute rigorous bounds for some specific solution or for the flow of some initial set of a given ODE. For almost fifty years, interval arithmetic has been used for calculating bounds for solutions of initial value problems. The origin of these methods dates back to Moore [5]. The most well-known interval method is the QR method due to Lohner [2], implemented in the AWA software package.

Unfortunately, interval methods sometimes suffer from overestimation. Pessimistic bounds are caused by the dependency problem, that is, the inability of interval arithmetic to identify different occurrences of the same variable, and by the wrapping effect, which occurs when intermediate results of a calculation are enclosed into intervals.

Overestimation is a particular concern in the verified solution of initial value problems for ODEs. While it may sometimes be possible to reduce dependency by skillful reformulation of the given equations or by evaluating all function expressions by centered forms, the wrapping effect is more difficult to prevent. Interval methods usually compute enclosures of the flow at intermediate time steps of the integration domain. When the flow is a nonconvex set and is bounded by some convex interval, overestimation is inevitable.
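The dependency problem fits in two lines: evaluating x − x over x = [0, 1] with naive interval subtraction gives [−1, 1] instead of the true range {0}, because the two occurrences of x are treated as independent. A minimal sketch (rounding ignored):

```python
# Naive interval subtraction illustrating the dependency problem.
def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(sub(x, x))     # -> (-1.0, 1.0), though x - x is identically 0
```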

For improving the bounds, Taylor models have been developed as a combination of symbolic and interval computations by Berz and his group since the 1990s. In Taylor model methods, the basic data type is not a single interval, but a Taylor model U := p_n + i consisting of a multivariate polynomial p_n of order n and some remainder interval i. In computations that involve U, the polynomial part is propagated by symbolic calculations where possible, and is thus hardly affected by the dependency problem or the wrapping effect. Only the interval

119

remainder term and polynomial terms of order higher than n, which are usuallysmall, are bounded using interval arithmetic.

Besides reducing dependency, Taylor model methods for ODEs also benefit from their capability to represent non-convex sets. This is an intrinsic advantage over interval methods for enclosing the flows of nonlinear ODEs, especially in combination with large initial sets or with large integration domains [1, 3, 4, 6].
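To illustrate how the symbolic polynomial part suppresses dependency, here is a minimal one-dimensional sketch (a toy data type of our own for illustration; it is not any actual Taylor model implementation, and supports only addition):

```python
# Toy 1-D Taylor model: a polynomial p_n on [-1, 1] plus a remainder
# interval i, so that f(x) is contained in p_n(x) + i on the domain.
from dataclasses import dataclass

@dataclass
class TaylorModel:
    coeffs: list   # c[0] + c[1]*x + ... + c[n]*x**n on [-1, 1]
    rem: tuple     # remainder interval (lo, hi)

    def __add__(self, other):
        n = max(len(self.coeffs), len(other.coeffs))
        c = [(self.coeffs[k] if k < len(self.coeffs) else 0.0) +
             (other.coeffs[k] if k < len(other.coeffs) else 0.0)
             for k in range(n)]
        # polynomial parts add symbolically; only remainders use intervals
        return TaylorModel(c, (self.rem[0] + other.rem[0],
                               self.rem[1] + other.rem[1]))

    def bound(self):
        # crude interval bound of p_n on [-1, 1]: c*x**k lies in [-|c|, |c|]
        lo = hi = self.coeffs[0]
        for c in self.coeffs[1:]:
            lo -= abs(c); hi += abs(c)
        return (lo + self.rem[0], hi + self.rem[1])

# x + (-x) evaluated with Taylor models: the dependency cancels exactly
x = TaylorModel([0.0, 1.0], (0.0, 0.0))
minus_x = TaylorModel([0.0, -1.0], (0.0, 0.0))
print((x + minus_x).bound())   # (0.0, 0.0) -- no overestimation
```

In plain interval arithmetic, evaluating x − x on [−1, 1] yields [−2, 2]; the Taylor model cancels the polynomial parts symbolically and keeps only the (here zero) remainder.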

In our talk, we analyze Taylor model methods for the verified integration of ODEs and compare these methods with interval methods.

References:

[1] M. Berz, K. Makino, Suppression of the wrapping effect by Taylor model-based verified integrators: Long-term stabilization by shrink wrapping, Int. J. Diff. Eq. Appl., 10 (2005), pp. 385–403.

[2] R. Lohner, Enclosing the solutions of ordinary initial- and boundary-value problems, in: Computer Arithmetic: Scientific Computation and Programming Languages (E. Kaucher, U. Kulisch, Ch. Ullrich, eds), Teubner, Stuttgart, 1987, pp. 255–286.

[3] K. Makino, M. Berz, Suppression of the wrapping effect by Taylor model-based verified integrators: Long-term stabilization by preconditioning, Int. J. Diff. Eq. Appl., 10 (2005), pp. 353–384.

[4] K. Makino, M. Berz, Suppression of the wrapping effect by Taylor model-based verified integrators: The single step, Int. J. Pure Appl. Math., 36 (2006), pp. 175–197.

[5] R.E. Moore, Interval Analysis, Prentice Hall, Englewood Cliffs, N.J., 1966.

[6] M. Neher, K.R. Jackson, N.S. Nedialkov, On Taylor model based integration of ODEs, SIAM J. Numer. Anal., 45 (2007), pp. 236–262.


Searching for solutions to the interval multi-criteria linear programming problem

Sergey I. Noskov

Irkutsk University of Railway Communications
15, Chernyshevskogo str., 664074 Irkutsk, Russia

noskov [emailprotected]

Keywords: interval, multi-criteria task, linear programming

A multi-criteria linear programming problem (MLP) is one of the classical problem statements in the theory of decision making, and its formulation has the form

Cx → max subject to x ∈ X,   X = { x ∈ R^n | Ax ≤ b, x ≥ 0 }.   (1)

Here, in contrast to the usual linear programming problem (LP), C is a matrix of dimension l × n rather than a vector, and A is a constraint matrix of dimension m × n. Thus, the multi-criteria problem (1) involves the simultaneous maximization of l linear criteria on a polyhedron, as distinct from the ordinary LP problem. Note that the normal form of (1) can easily be transformed to the canonical form. The constraint x ≥ 0 is also easy to implement.

As a rule, the traditional solution of the problem (1) does not exist; that is, there is no point x ∈ X such that Cx ≥ Cy for all y ∈ X, y ≠ x. In the case where the decision maker (DM) does not have a priori information on the relative importance of the various criteria, the solution of (1) is understood as the so-called Pareto set. Denote it by N ⊂ X. A point x ∈ N is called a Pareto solution (non-dominated, unimprovable) if it cannot be improved with respect to any criterion without worsening the value of at least one of the remaining criteria. Formally,

x ∈ N  ⟺  (∀y ∈ X, y ≠ x) ¬( (Cy ≥ Cx) ∧ (∃i: C_i y > C_i x) ),

where C_i is the i-th row (the i-th criterion) of the matrix C.

The problem of constructing the Pareto set in the MLP problem has been extensively covered in the literature, and one of the best publications on the subject is the article of P.L. Yu and M. Zeleny [1]. They have derived and theoretically substantiated a number of methods for constructing the set of Pareto vertices Nex ⊂ N and the whole Pareto set. In particular, the so-called multi-criteria simplex method is developed in [1] for the construction of the set Nex. It is based on a fundamental theorem whose formulation is given below.

Theorem [1]. The set Nex is connected; x0 ∈ N ⟺ ω = 0, and x0 ∈ D ⟺ ω > 0. Here D = X \ N, and ω is the optimal value of the LP problem

ω = max Σ_{i=1}^{l} e_i   over   { (x, e) ∈ R^{n+l} | x ∈ X, Cx − e ≥ Cx0, e ≥ 0 }.   (2)

The essence of the algorithm for constructing the set Nex described in [1] is as follows. We start by searching for the first Pareto vertex x1. To do this, it is sufficient to solve the LP problem with the objective function

Σ_{i=1}^{l} λ_i C_i x → max over x ∈ X,   with λ > 0.

After that, all the vertices neighbouring the point x1 are checked for being Pareto ones by solving the problem (2). Those that really prove to be Pareto vertices are included in Nex; then we test their adjacent vertices, and so on.
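As a hedged sketch of how the test (2) can be run in practice, the following uses scipy.optimize.linprog as a generic off-the-shelf LP solver (an assumption of ours, not the multi-criteria simplex implementation of [1]); a point x0 is Pareto-optimal exactly when the computed ω equals zero:

```python
# Pareto test (2): maximize sum(e) over (x, e) with Ax <= b, x >= 0,
# Cx - e >= C x0, e >= 0; x0 is Pareto iff the optimum omega is 0.
import numpy as np
from scipy.optimize import linprog

def pareto_omega(A, b, C, x0):
    m, n = A.shape
    l = C.shape[0]
    # variables (x, e) in R^{n+l}; maximize sum(e) -> minimize -sum(e)
    obj = np.concatenate([np.zeros(n), -np.ones(l)])
    # rows: A x <= b  and  -C x + e <= -C x0
    A_ub = np.block([[A, np.zeros((m, l))], [-C, np.eye(l)]])
    b_ub = np.concatenate([b, -C @ x0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + l))
    return -res.fun

# X = {x >= 0 : x1 + x2 <= 1}, two criteria: maximize x1 and maximize x2
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
C = np.eye(2)
print(pareto_omega(A, b, C, np.array([1.0, 0.0])))   # 0.0 -> Pareto vertex
print(pareto_omega(A, b, C, np.array([0.2, 0.2])))   # 0.6 -> dominated
```

The Pareto set of this toy problem is the segment x1 + x2 = 1, so (1, 0) passes the test while the interior point (0.2, 0.2) can be improved by 0.6 in total across the criteria.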

It should be noted that [1] gives (see, for example, Theorem 3.1 therein) a set of simple sufficient conditions for an arbitrary point y ∈ X to belong to the set D, which greatly facilitates the search. Note also that [2] presents a simple characterization of the Pareto set N.

We now pose the problem (1) somewhat differently; namely, we assume that the constraint matrix, the right-hand side and the criterion matrix are all interval (the scalar formulation has long been treated by various approaches). This raises a number of natural questions. Is the problem formulation correct? If so, what is meant by a Pareto solution and a Pareto vertex in this case? What does the multi-criteria simplex method become? There are also a number of extremely interesting related questions.

References:

[1] P.L. Yu, M. Zeleny, The set of all nondominated solutions in linear cases and a multicriteria simplex method, J. of Math. Anal. and Applic., 45 (1975), No. 2, pp. 430–468.

[2] S.I. Noskov, The problem of uniqueness of Pareto-optimal solutions in the problem of linear programming with a vector criterion function, Modern Technologies. System Analysis. Modeling, Special issue, 2011, pp. 283–285.


Verified solutions of sparse linear systems

Takeshi Ogita

Division of Mathematical Sciences, Tokyo Woman’s Christian University
2–6–1 Zempukuji, Suginami-ku, Tokyo 167–8585, Japan

[emailprotected]

Keywords: sparse linear systems, floating-point arithmetic, verified numerical computations

Solving linear systems is ubiquitous, since it is one of the basic and most significant tasks in scientific computing. Floating-point arithmetic is widely used for this purpose. Since it uses finite-precision arithmetic and numbers, rounding errors are included in computed results. To guarantee the accuracy of the results, there are so-called verified numerical computation methods based on interval arithmetic. Excellent overviews can be found in [6] and the references cited therein.

Let A be a real n × n matrix, and b a real n-vector. Let κ(A) = ‖A‖₂ · ‖A⁻¹‖₂ be the condition number of A, where ‖·‖₂ stands for the spectral norm. Throughout the talk we assume for simplicity that IEEE standard 754 binary64 (formerly, double precision) floating-point arithmetic is used. Let u denote the rounding error unit of floating-point arithmetic, which is equal to 2^{−53}.

We are concerned with practically proving the nonsingularity of A (if A is nonsingular) and then obtaining, by the use of verified numerical computations, a forward error bound of an approximate solution x of a linear system Ax = b with respect to the exact solution x* = A⁻¹b, such that |x*_i − x_i| ≤ ε_i for 1 ≤ i ≤ n. For this purpose, estimating ‖A⁻¹‖ in some matrix norm is essential.

For dense linear systems there are several efficient methods for this purpose (e.g., [1, 4]). For sparse systems things are quite different: fast and efficient verification for large sparse linear systems is still difficult in terms of both computational complexity and memory requirements, except in a few cases where it is known in advance, or can be proved, that A belongs to a certain special matrix class, e.g., diagonally dominant or M-matrix (see, e.g., [3]). Moreover, the super-fast verification method proposed in [7] applies to the case where A is sparse, symmetric and positive definite. However, to our knowledge, few methods are known for the case of A being a general sparse matrix, except the methods by Rump [5]. Thus the verification for sparse systems of linear (interval) equations is known as one of the important open problems posed by Neumaier in


Grand Challenges and Scientific Standards in Interval Analysis [2]. Moreover, Rump [6] formulated the following challenge:

Derive a verification algorithm which computes an inclusion of the solution of a linear system with a general symmetric sparse matrix of dimension 10000 with condition number 10^10 in IEEE 754 double precision, and which is no more than 10 times slower than the best numerical algorithm for that problem.

In the present talk we try to partially solve the problem for symmetric but not necessarily positive definite input matrices, and also, to a certain extent, for nonsymmetric matrices. Namely, we assume that A is large, e.g. n ≥ 10000, and sparse, possibly with κ(A) > 1/√u.

We survey some existing verification methods for sparse linear systems. After that, we propose new verification methods. Numerical results are also presented.

References:

[1] A. Neumaier, A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure for linear interval equations, Reliable Computing, 5 (1999), pp. 131–136, and Erratum, Reliable Computing, 6 (2000), p. 227.

[2] A. Neumaier, Grand challenges and scientific standards in interval analysis, Reliable Computing, 8 (2002), pp. 313–320.

[3] T. Ogita, S. Oishi, Y. Ushiro, Fast verification of solutions for sparse monotone matrix equations, Computing Suppl., 15 (2001), pp. 175–187.

[4] S. Oishi, S.M. Rump, Fast verification of solutions of matrix equations, Numer. Math., 90 (2002), No. 4, pp. 755–773.

[5] S.M. Rump, Validated solution of large linear systems, Computing Suppl., 9 (1993), pp. 191–212.

[6] S.M. Rump, Verification methods: Rigorous results using floating-point arithmetic, Acta Numerica, 19 (2010), pp. 287–449.

[7] S.M. Rump, T. Ogita, Super-fast validated solution of linear systems, J. Comp. Appl. Math., 199 (2007), No. 2, pp. 199–206.


Error estimates with explicit constants for Sinc quadrature and Sinc indefinite integration over infinite intervals

Tomoaki Okayama

Graduate School of Economics, Hitotsubashi University
2-1 Naka, Kunitachi, Tokyo 186-8601, Japan

[emailprotected]

Keywords: Sinc numerical methods, infinite interval, error estimates

The Sinc quadrature has been known as an efficient numerical integration formula for definite integrals ∫_a^b f(x) dx if the following conditions are met: (i) (a, b) = (−∞, ∞), and (ii) |f(x)| decays exponentially as x → ±∞. In other cases, users should employ an appropriate variable transformation x = ψ(t), i.e., the given integral is transformed as ∫_a^b f(x) dx = ∫_{−∞}^{∞} f(ψ(t)) ψ′(t) dt, so that those two conditions are met. Stenger [2] considered the following cases:

1. (a, b) = (−∞, ∞), and |f(x)| decays algebraically as x → ±∞;
2. (a, b) = (0, ∞), and |f(x)| decays algebraically as x → ∞;
3. (a, b) = (0, ∞), and |f(x)| decays (already) exponentially as x → ∞;
4. the interval (a, b) is finite,

and gave the concrete transformations for each case:

ψ_SE1(t) = sinh t,
ψ_SE2(t) = e^t,
ψ_SE3(t) = arcsinh(e^t),
ψ_SE4(t) = ((b − a)/2) tanh(t/2) + (b + a)/2,

which are called the Single-Exponential (SE) transformations. Takahasi and Mori [3] have proposed the following improved transformations:

ψ_DE1(t) = sinh((π/2) sinh t),
ψ_DE2(t) = e^{(π/2) sinh t},
ψ_DE3(t) = e^{t − exp(−t)},
ψ_DE4(t) = ((b − a)/2) tanh((π/2) sinh t) + (b + a)/2,

which are called the Double-Exponential (DE) transformations. Error analyses of them have been given [2, 4] in the following form:

|Error(SE)| ≤ C e^{−√(2π d α N)},    |Error(DE)| ≤ C e^{−π d N / log(8 d N / α)},

where α denotes the decay rate of the integrand, d indicates the width of the domain in which the transformed integrand is analytic, and C is a constant independent of N. In view of these inequalities, we notice that the accuracy of the approximation can be guaranteed if the constant C is explicitly given in a computable form. In fact, the explicit form of C has been revealed in the case 4 (the interval is finite) [1], and the result was used for verified automatic integration [5].

The purpose of this study is to reveal the explicit form of the C's in the remaining cases 1–3 (the interval is infinite), which enables us to bound the errors. Numerical experiments that confirm the results will be shown in this talk.

In addition to the Sinc quadrature described above, similar results can be given for the Sinc indefinite integration of indefinite integrals ∫_a^ξ f(x) dx, which will also be reported in this talk. For this (indefinite) case, a new variable transformation ψ_DE3‡(t) = log(1 + e^{π sinh t}) is proposed as the DE transformation in the case 3, so that its inverse can be written in terms of elementary functions.

References:

[1] T. Okayama, T. Matsuo, M. Sugihara, Error estimates with explicit constants for Sinc approximation, Sinc quadrature and Sinc indefinite integration, Mathematical Engineering Technical Reports 2009-01, The University of Tokyo, 2009.

[2] F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer-Verlag, New York, 1993.

[3] H. Takahasi, M. Mori, Double exponential formulas for numerical integration, Publications of the Research Institute for Mathematical Sciences, Kyoto University, 9 (1974), pp. 721–741.

[4] K. Tanaka, M. Sugihara, K. Murota, M. Mori, Function classes for double exponential integration formulas, Numerische Mathematik, 111 (2009), pp. 631–655.

[5] N. Yamanaka, T. Okayama, S. Oishi, T. Ogita, A fast verified automatic integration algorithm using double exponential formula, Nonlinear Theory and Its Applications, IEICE, 1 (2010), pp. 119–132.


On methodological foundations of interval analysis of empirical dependencies

Nikolay Oskorbin and Sergei Zhilin

Altai State University, 61, Lenin ave., 656049 Barnaul, Russia
[emailprotected], [emailprotected]

Keywords: methodology, experimental data processing, interval observations, inconsistent data

We consider methodological issues in the usage of interval analysis as a method for the mathematical modeling of real-world processes and experimental data processing.

Suppose that, for a process described by a linear dependence y = xβ* with output variable y ∈ R, input variables x ∈ R^p and unknown true values of the parameters β* ∈ R^p, we have a set of interval observations { (Y_j, X_j) | j = 1, . . . , N }. The problem of estimating the process parameters is reduced to finding the united solution set B(N) of the interval linear system Y = XB. The set B(N) of possible parameter values is also called the information set. If the underlying assumptions about the structure of the dependence and the validity of the interval observations are strictly fulfilled, the inclusion β* ∈ B(N) ≠ ∅ holds. This inclusion is the fundamental foundation of the reliability of the constructed parameter estimates.

This interval approach to the modeling of processes has been developed by a number of authors and competes with the probabilistic approach in the efficiency of estimates in a number of applications. The benefits of using the interval approach are the simplicity and reliability of data and knowledge, flexibility in the employment of a priori information, and the possibility of state estimation, forecasting and choosing control actions for a modeled process. There are applications of the interval approach to the modeling of nonlinear processes and of processes with an inner noise.

Essential difficulties arise in the case when we are not sure about the underlying assumptions of the method, and hence there is no good cause to state β* ∈ B(N) even if B(N) ≠ ∅. The fulfilment of the assumptions cannot be verified using only the existing data and knowledge. An analogous problem situation often takes place when statistical probabilistic methods are used for data analysis.

When looking for a way out of this situation, it is necessary to take into account the following principles.


1. It is impossible to obtain reliable estimates of process parameters using an inconsistent set of data and knowledge about the process.

2. No inner mathematical needs can serve as grounds for any kind of modification of the analyzed data and knowledge.

In the authors' opinion, the methodologically correct way out of the impasse involves discovering inconsistencies in the data and knowledge and eliminating them after appraisal by application domain experts.

The proposed way is implemented in the case when B(N) = ∅ [1–3]. Widening some or, in general, all interval variables allows us to obtain an information set B(N, k) ⊃ B(N) which is determined by a set k of expansion coefficients for the interval variables. The expanded set B(N, k) is formed by elementary information portions B_j(N, k), so that B(N, k) = ∩_{j=1}^{N} B_j(N, k). By choosing k, we can obtain B(N, k) ≠ ∅ and detect the portions which need a domain expert's appraisal.

Besides, to discover inconsistencies one can

• estimate the informational value of each portion of data and knowledge against the selected basic set;

• relate the volume of B(N, k) to the value of N ;

• investigate the dynamics of the volume of B(N, k) depending on N .

Implementation of the proposed approach demands the development of suitable mathematical tools and the accumulation of experience in specific applications. We show model and real-world case studies to illustrate the approach.
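The membership test underlying the information set, and its widened variant B(N, k), can be sketched as follows (our own toy reading: the widening rule and all names are illustrative assumptions, and the widening is applied to the output intervals only):

```python
# A point beta belongs to the united solution set B(N) iff, for every
# observation j, the interval value [X_j] * beta intersects [Y_j].
def ivl_dot(x_ivls, beta):
    # interval dot product of interval regressors with a point beta
    lo = hi = 0.0
    for (xl, xh), b in zip(x_ivls, beta):
        cands = (xl * b, xh * b)
        lo += min(cands); hi += max(cands)
    return lo, hi

def in_info_set(obs, beta, k=0.0):
    # obs: list of ((y_lo, y_hi), x_intervals); k widens each y symmetrically
    for (yl, yh), x_ivls in obs:
        mid, rad = (yl + yh) / 2.0, (yh - yl) / 2.0 + k
        lo, hi = ivl_dot(x_ivls, beta)
        if hi < mid - rad or lo > mid + rad:   # empty intersection
            return False
    return True

# one regressor: y ~ beta * x with x in [0.9, 1.1], y in [1.8, 2.2]
obs = [((1.8, 2.2), [(0.9, 1.1)])]
print(in_info_set(obs, [2.0]))          # True:  2*[0.9,1.1] meets [1.8,2.2]
print(in_info_set(obs, [3.0]))          # False: 3*[0.9,1.1] = [2.7,3.3]
print(in_info_set(obs, [3.0], k=0.6))   # True after widening y by 0.6
```

Observations that only become consistent after a large widening coefficient are exactly the "portions" that the approach flags for a domain expert's appraisal.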

The authors wish to express their gratitude to Professor S.P. Shary for his initiative to prepare this talk.

References:

[1] N.M. Oskorbin, A.V. Maksimov, S.I. Zhilin, Construction and analysis of empirical dependencies using the uncertainty center method, Transactions of Altai State University, 1 (1998), pp. 35–38 (in Russian).

[2] S.I. Zhilin, Simple method for outlier detection in fitting experimental data under interval error, Chemometrics and Intelligent Laboratory Systems, 88 (2007), No. 1, pp. 60–68.

[3] S.P. Shary, Solvability of interval linear equations and data analysis under uncertainty, Automation and Remote Control, 73 (2012), No. 2, pp. 310–322.


Performance comparison of accurate matrix multiplication

Katsuhisa Ozaki1 and Takeshi Ogita2

1Shibaura Institute of Technology
307 Fukasaku, Minuma-ku, Saitama-shi, Saitama 337-8570, Japan

2Tokyo Woman’s Christian University
2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan

[emailprotected]

Keywords: accurate computations, matrix multiplication

This talk discusses accurate numerical algorithms for matrix multiplication. Accurate matrix multiplication is useful for verified numerical computations, especially for verified solutions of systems of equations, including proofs of matrix nonsingularity and Krawczyk's method (see, for example, Chapter 4 in [1], Section 4 in [2] and Section 10 in [5]). Let A be an m-by-n matrix and B an n-by-p matrix, both with floating-point entries as defined by the IEEE 754-2008 standard. If the matrix multiplication AB is evaluated by floating-point arithmetic, an inaccurate result may be obtained due to the accumulation of rounding errors. The aim is to develop an algorithm outputting a computed result C such that

|C − AB| ≤ u|AB|,   (1)

where u is the relative rounding error unit, for example, u = 2^{−53} for binary64. The inequality implies that C is as accurate as if AB were first evaluated exactly and the result were rounded to the nearest floating-point numbers componentwise. To achieve (1), a simple way is to apply an accurate summation algorithm, such as the one proposed by Rump, Ogita and Oishi in [3], to each dot product in the matrix multiplication, since a dot product can be transformed into a sum of 2n floating-point numbers by so-called error-free transformations. This algorithm is called Algorithm-A in this abstract.
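The error-free transformation of a dot product can be sketched with the standard TwoSum/TwoProd constructions (given here as our own illustration in Python, whose floats are binary64; this is not the authors' optimized code):

```python
# Error-free transformations: a dot product of length n becomes an
# unevaluated, exactly equal sum of 2n floating-point numbers.
import math
import random
from fractions import Fraction

def two_sum(a, b):
    # a + b == s + e exactly for binary64 a, b (Knuth's TwoSum)
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def split(a):
    # Dekker splitting of a binary64 number into two non-overlapping halves
    c = 134217729.0 * a          # 2**27 + 1
    h = c - (c - a)
    return h, a - h

def two_prod(a, b):
    # a * b == p + e exactly (assuming no over-/underflow)
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def dot_as_sum(x, y):
    # 2n floats whose exact sum equals the exact dot product of x and y
    parts = []
    for a, b in zip(x, y):
        p, e = two_prod(a, b)
        parts += [p, e]
    return parts

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(200)]
y = [random.uniform(-1, 1) for _ in range(200)]
parts = dot_as_sum(x, y)
exact = sum(Fraction(a) * Fraction(b) for a, b in zip(x, y))
print(sum(Fraction(p) for p in parts) == exact)   # True: no error committed
accurate_dot = math.fsum(parts)                   # then sum the 2n terms
```

Algorithm-A replaces math.fsum here by the summation algorithm of [3]; the point is only that the 2n generated terms represent the dot product without any rounding error at all.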

Recently, an error-free transformation of matrix multiplication [4] has been developed by the authors. It transforms a product of two floating-point matrices into an unevaluated sum of floating-point matrices, namely

AB = Σ_{i=1}^{q} C^{(i)},    q ∈ N,

where each C^{(i)} is an m-by-p floating-point matrix. By using this transformation and the accurate summation algorithm given in [3], it is possible to develop an algorithm which achieves (1). Namely, the error-free transformation is applied first. Next, the accurate summation algorithm [3] is applied to the sum of matrices componentwise. This algorithm is called Algorithm-B in this abstract.

First, we compare the computational performance and the efficiency of parallelization of the two algorithms by numerical examples. If there is not much difference in the order of magnitude among the elements in the same row of A and those in the same column of B, then Algorithm-B is much faster than Algorithm-A. Otherwise, Algorithm-A is faster than Algorithm-B.

A drawback of Algorithm-B is that it requires a large amount of working memory. To overcome this problem, we develop a new algorithm which reduces the amount of working memory by block matrix computations and the reuse of working memory. We incorporate these approaches into Algorithm-B. The resulting algorithm is called Algorithm-C in this abstract. It is shown by numerical examples that such approaches for saving working memory are efficient and do not slow down the computational performance significantly. For example, if the required working memory for Algorithm-C is reduced to 1/5 of that for Algorithm-B, then Algorithm-C is only 20% slower than Algorithm-B in the numerical examples.

References:

[1] A. Neumaier, Interval Methods for Systems of Equations, Cambridge Univ. Press, Cambridge, 1990.

[2] A. Frommer, B. Hashemi, Verified error bounds for solutions of Sylvester matrix equations, Linear Algebra and Its Applications, 436 (2012), No. 2, pp. 405–420.

[3] S.M. Rump, T. Ogita, S. Oishi, Accurate floating-point summation, Part II: Sign, K-fold faithful and rounding to nearest, SIAM J. Sci. Comput., 31 (2008), No. 2, pp. 1269–1302.

[4] K. Ozaki, T. Ogita, S. Oishi, S.M. Rump, Error-free transformation of matrix multiplication by using fast routines of matrix multiplication and its applications, Numerical Algorithms, 59 (2012), No. 1, pp. 95–118.

[5] S.M. Rump, Verification methods: Rigorous results using floating-point arithmetic, Acta Numerica, 19 (2010), pp. 287–449.


Interval methods for global unconstrained optimization: a software package

Valentin N. Panovskiy

Moscow Aviation Institute NRU
4, Volokolamskoe shosse, 125993 Moscow, Russia

[emailprotected]

Keywords: interval arithmetic, range dichotomy, cutoff of virtual values, stochastic cutoff, changing directions, unconstrained global optimization

This work is devoted to interval methods for the solution of unconstrained global optimization problems. The use of interval analysis as a base component of the methods gives essential advantages (e.g., fewer requirements on the problem statement [1]).

The method of range dichotomy and the cutoff methods use the approach previously elaborated in [4]. The feature of our technique is that it does not use subdivision of the function domain: only the range of values is divided. According to the terminology of [4], all the components of the domain are “mute” in this case. Our methods use a construction called the invertor, which requires, on input, the objective function, a target interval, a box and an accuracy parameter. The invertor returns a set of boxes on which the objective function yields an interval that either belongs to the target interval or has a non-empty intersection with it (in the latter case, the width of the box must satisfy the accuracy constraint).

At the first step of the method of range dichotomy, we evaluate the range of the function over the search area and consider it as the target interval. Further, on each iteration the target interval is bisected, and we apply the invertor to its first part. If the invertor returns a nonempty set, we check the accuracy condition. In case of its failure, the first part is taken as the new target interval, and a new iteration of the method starts. Otherwise, the returned set has a box that contains the global minimum of the function. If the invertor returns an empty set, the second part is taken as the new target interval, and the method begins a new iteration.
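A minimal one-dimensional sketch of the invertor may look as follows (our own toy reading of the construction described above; the bisection details are illustrative assumptions, not the package's actual code):

```python
# Toy 1-D "invertor": return subintervals of the box on which the interval
# extension F lies inside the target, plus narrow boxes that intersect it.
def invertor(F, target, box, eps):
    tl, th = target
    inside, boundary, stack = [], [], [box]
    while stack:
        lo, hi = stack.pop()
        fl, fh = F(lo, hi)
        if fh < tl or fl > th:          # no intersection with the target
            continue
        if tl <= fl and fh <= th:       # range fully inside the target
            inside.append((lo, hi))
        elif hi - lo <= eps:            # intersects and is narrow enough
            boundary.append((lo, hi))
        else:                           # undecided: bisect the box
            m = 0.5 * (lo + hi)
            stack += [(lo, m), (m, hi)]
    return inside, boundary

def F(lo, hi):
    # exact interval extension of f(x) = (x - 1)^2 on [lo, hi]
    a, b = lo - 1.0, hi - 1.0
    if a <= 0.0 <= b:
        return 0.0, max(a * a, b * b)
    v = (a * a, b * b)
    return min(v), max(v)

inside, boundary = invertor(F, (0.0, 0.04), (-2.0, 3.0), 1e-3)
lo_all = min(bx[0] for bx in inside + boundary)
hi_all = max(bx[1] for bx in inside + boundary)
print(lo_all, hi_all)   # ~0.8 and ~1.2: where (x - 1)^2 <= 0.04
```

The returned boxes cluster around the minimizer x = 1; iterating with ever smaller target intervals, as in the range dichotomy method, shrinks this cluster onto the global minimum.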

The strategies of the cutoff methods are similar to the strategy of the range dichotomy method. The only distinction is the presence of a tightening stage, which comes before the iterative part. After the range evaluation, we apply a compression operator to it, which deletes a part of the virtual values from the evaluation. Then the method works as the previous one. The stage described tries to reduce the target interval, which accelerates convergence.

The strategy of the changing-directions method consists in a constant analysis of the best box and of all the potentially best boxes stored in memory. To explain the work of the method, it is necessary to introduce the concept of the double buffer, by which we mean a set of ordered pairs (“box”, “enclosure of the range”). One box is considered better than another if it has a smaller lower boundary and a smaller width of its range enclosure. On each iteration, the best box, which becomes the target box, is selected from the double buffer. Further, the target box is bisected at its midpoint. Then we evaluate the ranges over the newly created boxes and restructure the double buffer. This method works while there is at least one box in the double buffer that does not meet the accuracy requirement.

The methods have not only been theoretically substantiated, with their convergence proved; they have also been tested on benchmark unconstrained global optimization problems (global minimization of the Schwefel, Griewank and Ackley functions, etc.). All the methods compute a box which contains the point of the global minimum or is close enough to it according to the accuracy specification.

The methods have been implemented, using C#, as a software package that can solve unconstrained global optimization problems and automatically analyze the efficiency of the methods.

References:

[1] L. Jaulin, M. Kieffer, Applied Interval Analysis, Springer-Verlag, London, 2001.

[2] R.E. Moore, R.B. Kearfott, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[3] S.P. Shary, Finite-dimensional Interval Analysis, XYZ, Novosibirsk, 2012 (in Russian).

[4] S.P. Shary, A surprising approach in interval global optimization, Reliable Computing, 7 (2001), No. 6, pp. 497–505.

[5] V.N. Panovskiy, Application of the interval analysis for the search of the global extremum of functions, Trudy MAI, No. 51 (2012), http://www.mai.ru/science/trudy/published.php?eng=Y&ID=28953


Application of redundant positional notations for increasing arithmetic algorithms scalability

Anatoly V. Panyukov

South Ural State University
76, Lenin ave., 454080 Chelyabinsk, Russia

a [emailprotected]

Keywords: integer arithmetic, positional notation, hybrid scalability

For the algorithmic analysis of large-scale unstable problems (considered, e.g., in [1]), the library “Exact computation” [2–4] provides helpful instruments for distributed computing environments. A further increase in the effectiveness of such software is possible in heterogeneous computing environments that allow one to parallelize the execution of local arithmetic operations over a large number of threads. The application of redundant positional notations is also an effective approach to increasing the scalability of arithmetic algorithms.

References:

[1] A.V. Panyukov, V.A. Golodov, Calculating of pseudo-solution of linear equation systems with interval uncertainty of coefficients, in: Algorithmic Analysis of Unstable Problems: Abstracts of the International Conference, Yekaterinburg, October 31 – November 5, 2011, Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, 2011, pp. 262–263 (in Russian).

[2] A.V. Panyukov, V.V. Gorbik, Using massively parallel computations for absolutely precise solution of the linear programming problems, Automation and Remote Control, 73 (2012), No. 2, pp. 276–290.

[3] A.V. Panyukov, V.V. Gorbik, Exact and guaranteed accuracy solutions of linear programming problems by distributed computer systems with MPI, Tambov University Reports, Series: Natural and Technical Sciences, 15 (2010), No. 4, pp. 1392–1404.

[4] V.A. Golodov, Distributed symbolic rational calculation on x86 and x64 CPUs, Proceedings of the International Conference “Parallel Computing Technologies” (Novosibirsk, March 26–30, 2012), South Ural State University Press, Chelyabinsk, p. 774 (in Russian).


Computing the best possible pseudo-solutions to interval linear systems of equations

Anatoly V. Panyukov1 and Valentin A. Golodov2

South Ural State University
76, Lenin ave., 454080 Chelyabinsk, Russia

1a [emailprotected], [emailprotected]

Keywords: interval uncertainty, interval linear equations, linear programming, massively parallel computations, pseudo-solution, tolerable solution set

We consider the solution of a linear system Ax = b under interval uncertainty of its elements, which belong to an interval n × n matrix A and an interval right-hand side n-vector b. That is, we only know that a_ij ∈ a_ij = [a̲_ij, a̅_ij] and b_i ∈ b_i = [b̲_i, b̅_i] for all i, j = 1, 2, . . . , n.

As a solution to the disturbed linear system, we consider a point from the tolerable solution set Ξ_tol(A, b) = { x ∈ R^n | (∀A ∈ A)(Ax ∈ b) }. A substantial contribution to the theory of the tolerable solution set and the tolerance problem has been made by J. Rohn [1] and S. Shary [2].

For real-life situations, we often have Ξ_tol(A, b) = ∅. By parity of reasoning, we introduce the concept of a pseudo-solution [3] for interval linear equation systems. Let b(z) = [ b̲ − z|b̲|, b̅ + z|b̅| ], z > 0, and denote z* = inf{ z | Ξ_tol(A, b(z)) ≠ ∅ }. A pseudo-solution of the original system Ax = b is, by definition, an inner point of the tolerable solution set Ξ_tol(A, b(z*)). Extending Rohn's representation of the tolerable solution set [1], we deduce

Theorem 3. A solution x^{+*}, x^{−*} ∈ R^n, z* ∈ R to the linear programming problem

z → min over x^+, x^−, z subject to

Σ_{j=1}^{n} ( a̲_ij x_j^+ − a̅_ij x_j^− ) ≥ b̲_i − z|b̲_i|,    i = 1, 2, . . . , n,

Σ_{j=1}^{n} ( a̅_ij x_j^+ − a̲_ij x_j^− ) ≤ b̅_i + z|b̅_i|,    i = 1, 2, . . . , n,

x_j^+, x_j^−, z ≥ 0,    j = 1, 2, . . . , n,    (1)

exists, and x* = x^{+*} − x^{−*} is a pseudo-solution to the linear system Ax = b.

The linear programming problem (1) is strongly degenerate, and solving it with the use of standard floating-point data types is impossible, since cycling is not efficiently eliminated by the known anticycling tools under approximate computations. The cycling and accuracy problems can be solved by using symbolic rational-fractional computations [5]. To accelerate the computations, we may avail ourselves of massively parallel computations [6].
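For illustration, the LP (1) can be set up with scipy.optimize.linprog (a generic floating-point solver we assume here; the authors use exact rational arithmetic precisely because floating-point LP solvers struggle with this degenerate problem):

```python
# Sketch of the LP (1): variables (x+, x-, z), minimize z subject to
#   A_lo x+ - A_hi x- >= b_lo - z|b_lo|,  A_hi x+ - A_lo x- <= b_hi + z|b_hi|.
import numpy as np
from scipy.optimize import linprog

def pseudo_solution(A_lo, A_hi, b_lo, b_hi):
    n = A_lo.shape[0]
    c = np.concatenate([np.zeros(2 * n), [1.0]])   # objective: z
    A_ub = np.block([[-A_lo,  A_hi, -np.abs(b_lo)[:, None]],
                     [ A_hi, -A_lo, -np.abs(b_hi)[:, None]]])
    b_ub = np.concatenate([-b_lo, b_hi])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n + 1))
    x = res.x[:n] - res.x[n:2 * n]                 # x* = x+ - x-
    return x, res.x[-1]                            # pseudo-solution and z*

# 1x1 example: a in [1, 2], b = 3; the tolerable set is empty, but the
# widened right-hand side [3 - 3z, 3 + 3z] becomes feasible at z* = 1/3
x, z = pseudo_solution(np.array([[1.0]]), np.array([[2.0]]),
                       np.array([3.0]), np.array([3.0]))
print(x, z)   # x = [2.], z = 1/3: indeed 2 * [1, 2] = [2, 4] = b(1/3)
```

On this well-behaved toy instance the floating-point solver succeeds; the degenerate cases motivating the authors' exact rational approach arise at realistic sizes.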

In our talk, we discuss solutions to a number of the above problems and present the new theory and techniques elaborated in the course of our research.

The work is performed with the support of the Russian Foundation for Basic Research (project No. 10-07-96003-r ural a).

References:

[1] J. Rohn, Inner solutions of linear interval systems, in: Interval Mathematics 1985 (K. Nickel, ed.), Springer-Verlag, New York, 1986, pp. 157–158.

[2] S.P. Shary, Solving the linear interval tolerance problem, Mathematics and Computers in Simulation, 39 (1995), pp. 53–85.

[3] V.Ya. Arsenin, On ill-posed problems, Russian Math. Surveys, 31 (1976), No. 6, pp. 93–107.

[4] A.V. Panyukov, V.A. Golodov, Calculating of pseudo-solution of linear equation systems with interval uncertainty of coefficients, in: Algorithmic Analysis of Unstable Problems: Abstracts of the International Conference, Yekaterinburg, October 31 – November 5, 2011, Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences, 2011, pp. 262–263 (in Russian).

[5] V.A. Golodov, Distributed symbolic rational calculation on x86 and x64 CPUs, Proceedings of the International Conference “Parallel Computing Technologies” (Novosibirsk, March 26–30, 2012), South Ural State University Press, Chelyabinsk, p. 774 (in Russian).

[6] A.V. Panyukov, V.V. Gorbik, Using massively parallel computations for absolutely precise solution of the linear programming problems, Automation and Remote Control, 73 (2012), No. 2, pp. 276–290.


Properties and estimations

of parametric AE-solution sets

Evgenija D. Popova

Institute of Mathematics and Informatics, BAS
Acad. G. Bonchev str., block 8

1113 Sofia, Bulgaria

Keywords: linear systems, dependent data, AE-solution set, characterization, estimations, tolerable solution set, controllable solution set

Consider linear systems A(p)x = b(p), where the elements of the matrix and right-hand side vector are linear functions of uncertain parameters varying within given intervals, pi ∈ [pi], i = 1, . . . , k. Such systems are common in many engineering analysis or design problems, control engineering, robust Monte Carlo simulations, etc., where there are complicated dependencies between the model parameters which are uncertain. Various solution sets to a parametric linear system can be defined depending on the way the parameters are quantified by the existential and/or the universal quantifiers. We are interested in the parametric AE-solution sets, which are defined by universally and existentially quantified parameters, where the former precede the latter. For two disjoint sets of indexes E and A, such that E ∪ A = {1, . . . , k},

ΣpAE = Σ(A(pA, pE), b(pA, pE), [p])

:= {x ∈ Rn | (∀pA ∈ [pA])(∃pE ∈ [pE])(A(p)x = b(p))}.

Parametric AE-solution sets generalize the parametric united solution set and the nonparametric AE-solution sets.

In this talk we present three types of characterizations for the parametric AE-solution sets: a set-theoretic characterization, a characterization in the form of interval inclusions, and a characterization by Oettli-Prager-type absolute-value inequalities. The focus of the characterizations is on how to obtain an explicit description of a parametric AE-solution set in the form of Oettli-Prager-type inequalities. The description is explicit for some classes of parametric AE-solution sets, and in the general case it can be obtained by a Fourier-Motzkin-type algorithmic procedure eliminating the existentially quantified parameters.

The characterizations of parametric AE-solution sets inspire proving various properties of these solution sets and designing some numerical methods for their


outer or inner estimation. We will present some important inclusion relations between classes of parametric AE-solution sets, where the relations are determined by the type of the parameter dependencies. Various other properties, like the shape of a parametric AE-solution set and some criteria for nonempty and bounded solution sets, will also be discussed. Special consideration is given to the parametric tolerable solution set

Σptol = Σ(A(pA), b(pE), [p])

:= {x ∈ Rn | (∀pA ∈ [pA])(∃pE ∈ [pE])(A(pA)x = b(pE))}

and for the parametric controllable solution set

Σpcont = Σ(A(pE), b(pA), [p])

:= {x ∈ Rn | (∀pA ∈ [pA])(∃pE ∈ [pE])(A(pE)x = b(pA))}.

Some numerical methods for outer and inner estimation of parametric AE-solution sets will also be presented. The properties of these methods for estimating the parametric tolerable and the parametric controllable solution sets are compared. We show that in some cases the parametric approach provides a more efficient solution for some nonparametric problems than the existing nonparametric approaches. Numerical examples accompanied by graphic representations will illustrate the solution sets and their properties or the numerical methods and their properties. Some of the properties (or methods) are new; most of them generalize known properties (or methods) for nonparametric AE-solution sets, studied by I. Sharaya, S. Shary and others. Presenting some first results about parametric AE-solution sets, the talk will also outline some open problems and directions of possible further research.
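
For the simplest, nonparametric case, the tolerable solution set admits a well-known inclusion characterization: x ∈ Σtol if and only if the interval product [A]·x is contained in [b]. The following minimal Python sketch is illustrative only; the function name in_tolerable and the representation of intervals as endpoint pairs are our own conventions, not from the talk.

```python
def mul_interval_scalar(lo, hi, x):
    """Product of the interval [lo, hi] with a real scalar x."""
    p = (lo * x, hi * x)
    return (min(p), max(p))

def in_tolerable(A_lo, A_hi, b_lo, b_hi, x):
    """x is tolerable iff for every row i: sum_j [A_ij] * x_j ⊆ [b_i]."""
    n = len(x)
    for i in range(len(A_lo)):
        s_lo = s_hi = 0.0
        for j in range(n):
            lo, hi = mul_interval_scalar(A_lo[i][j], A_hi[i][j], x[j])
            s_lo += lo
            s_hi += hi
        if not (b_lo[i] <= s_lo and s_hi <= b_hi[i]):
            return False
    return True

# 1-D example: [A] = [2, 3], [b] = [0, 6]; x = 1 gives [2, 3] ⊆ [0, 6].
print(in_tolerable([[2.0]], [[3.0]], [0.0], [6.0], [1.0]))   # True
print(in_tolerable([[2.0]], [[3.0]], [0.0], [6.0], [3.0]))   # False: [6, 9] ⊄ [0, 6]
```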


An interval approach to recognition

of numerical matrices

Alexander Prolubnikov

Omsk State University
55-A, Mira ave.

644077 Omsk, Russia

Keywords: pattern recognition, outer estimation

In our work, we propose a new interval approach to recognition of numerical matrices.

The registration of data by technical means is often complicated by measurement errors or noise that interfere with the registration process. If it is known that the data presented in the form of a matrix are distorted by noise or registered with errors from a given set of pattern matrices, a common problem is to recognize the patterns under specified constraints on the noise or measurement errors.

An obvious example of the problem under consideration is recognition of raster images. Existing algorithms for recognition of raster images, such as, for example, those using neural networks [1], parametric algorithms, and algorithms based on the theory of morphological analysis [2], include a preliminary learning stage, during which the algorithm should be taught from images of the object obtained under various registration conditions. The purpose of the learning process is to fix some characteristics of the image, which can be used for subsequent recognition. The distinctive feature of our approach, compared with the traditional ones, is the absence of a learning stage within the recognition algorithm.

The problem is formulated as follows. We are given a set of N rectangular m × n matrices S = {A(k)}, k = 1, . . . , N, whose elements a(k)ij are real numbers. The matrix A is obtained from some matrix A(k0) ∈ S in the course of noising. It is known that the values of the elements of the matrices can vary within the intervals [a(k)ij − δij, a(k)ij + δij], δij ∈ R+ (i = 1, . . . , m, j = 1, . . . , n). We need to identify k0.

We associate the input matrices with the systems of interval linear equations of the form

A(k)x = e,


where e = (1, . . . , 1)⊤ and A(k) is an interval matrix built for the pair of matrices A and A(k) (k = 1, . . . , N). Let Ξ(k) denote the united solution set of the k-th interval linear system. The Lebesgue measures µ(Ξ(k)) of interval enclosures of the sets Ξ(k) are used as recognition heuristics.

We consider specific procedures that construct the matrices A(k) and justify the choice of the right-hand side vectors of the interval linear systems. The matrices can be built so that they are amenable to usual interval numerical methods. Specifically, the matrices A(k) may be made H-matrices by construction, which makes it possible to use the interval Gauss-Seidel method for enclosing their united solution sets [3]. The total computational complexity of the proposed recognition algorithm is estimated as O(d²), where d = max{m, n}.
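
The enclosure-based heuristic can be sketched as follows. This is a simplified, hypothetical illustration, not the authors' exact construction of A(k): intervals are represented as endpoint pairs, and a diagonally dominant example plays the role of an H-matrix for which plain interval Gauss-Seidel contracts.

```python
# Enclose the united solution set of [A]x = e by interval Gauss-Seidel,
# then use the volume of the enclosing box as a recognition score.

def imul(a, b):
    cands = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(cands), max(cands))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def idiv(a, b):
    """Interval division [a]/[b], assuming 0 not in [b]."""
    cands = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(cands), max(cands))

def gauss_seidel(A, e, x0, sweeps=50):
    """A: n×n matrix of intervals; e: right-hand side; x0: initial box."""
    n = len(A)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            acc = e[i]
            for j in range(n):
                if j != i:
                    acc = isub(acc, imul(A[i][j], x[j]))
            y = idiv(acc, A[i][i])
            # intersect the update with the previous enclosure
            x[i] = (max(x[i][0], y[0]), min(x[i][1], y[1]))
    return x

def box_volume(x):
    v = 1.0
    for lo, hi in x:
        v *= (hi - lo)
    return v

A = [[(3.9, 4.1), (0.9, 1.1)], [(0.9, 1.1), (3.9, 4.1)]]   # H-matrix
e = [(1.0, 1.0), (1.0, 1.0)]
box = gauss_seidel(A, e, [(-10.0, 10.0), (-10.0, 10.0)])
print(box_volume(box))   # small volume: a tight enclosure
```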

We present the results of computational experiments and a comparison with other known approaches to the recognition problem. It is worthwhile to note that our numerical experiments include the recognition of grayscale and monochrome images.

References:

[1] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB. User's Guide. Version 4. Available at http://cs.mipt.ru/docs/comp/eng/develop/software/matlab/nnet/main.pdf [Accessed April 15, 2012].

[2] E.A. Kirnos, A Comparative Analysis of Morphological Methods of Image Interpretation, Candidate of phys.-math. sciences dissertation: 05.13.18, Moscow, 2004.

[3] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.


Maximizing stability degree of interval

systems using coefficient method

Maxim I. Pushkarev, Sergey A. Gaivoronsky

Tomsk Polytechnic University

Keywords: interval system, stability degree, coefficient method

The work is devoted to maximization of the stability degree for linear systems having interval parameter uncertainty. We propose to solve this problem by the so-called coefficient method, using sufficient conditions for a specified stability degree η.

If we are given the characteristic polynomial of a linear system, A(s) = an s^n + an−1 s^(n−1) + . . . + a0, an > 0, then the stability degree conditions, with regard to the controller tunings vector k, can be written as follows [1]:

   λi(k, η) = ai−1(k) ai+2(k) / [(ai(k) − ai+1(k)(n − i − 1)η)(ai+1(k) − ai+2(k)(n − i − 2)η)],
       i = 1, . . . , n − 2;
   fl(k, η) = al(k) − al+1(k)(n − l − 1)η,   l = 1, . . . , n − 1;          (1)
   g(k, η) = a0(k) − a1(k)η + 2a2(k)η²/3.

Varying η in the above expressions allows one to find its maximum value, which will be considered as a lower estimate η∗ of the maximum stability degree. In such a case, the synthesis problem is to choose the controller parameters k∗ that provide the lower estimate of the maximum stability degree η∗max, which may be called the "quasimaximum stability degree" of the system.

Increasing η in each expression from (1) by changing the controller tunings is possible up to the value at which λi(k, η) = 0.465. Thereby, determination of η∗max and k∗ requires (n − 2) solutions of the following system of equations:

   λi(k, η) = λ∗,   i = 1, . . . , n − 2;
   λj(k, η) < λ∗,   j = 1, . . . , n − 2, j ≠ i;
   fl(k, η) ≥ 0,    l = 1, . . . , n − 1;                                   (2)
   g(k, η) ≥ 0.


At each step, this results in the maximum value of η∗, and then we can choose the maximum estimate among them.
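
The search for the largest admissible η can be sketched as a bisection over a feasibility predicate. This is an illustrative sketch under our reading of conditions (1), with plain floats and fixed numeric coefficients rather than controller tunings k; the names conditions_hold and max_eta and the example polynomial are ours.

```python
LAMBDA_STAR = 0.465   # the threshold λ* of the coefficient method

def conditions_hold(a, eta):
    """a = [a0, a1, ..., an]; check λ_i ≤ λ*, f_l ≥ 0, g ≥ 0."""
    n = len(a) - 1
    # f_l = a_l − a_{l+1} (n − l − 1) η ≥ 0, l = 1..n−1
    if any(a[l] - a[l + 1] * (n - l - 1) * eta < 0 for l in range(1, n)):
        return False
    # g = a_0 − a_1 η + 2 a_2 η² / 3 ≥ 0
    if a[0] - a[1] * eta + 2 * a[2] * eta ** 2 / 3 < 0:
        return False
    # λ_i ≤ λ* for i = 1..n−2 (both denominators must stay positive)
    for i in range(1, n - 1):
        d1 = a[i] - a[i + 1] * (n - i - 1) * eta
        d2 = a[i + 1] - a[i + 2] * (n - i - 2) * eta
        if d1 <= 0 or d2 <= 0 or a[i - 1] * a[i + 2] / (d1 * d2) > LAMBDA_STAR:
            return False
    return True

def max_eta(a, hi=10.0, iters=60):
    """Bisection for the largest feasible η (assumes the feasible set
    is an interval [0, η_max], which holds for this example)."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if conditions_hold(a, mid) else (lo, mid)
    return lo

# (s+1)(s+2)(s+3)(s+4) = s^4 + 10 s^3 + 35 s^2 + 50 s + 24:
print(max_eta([24, 50, 35, 10, 1]))
```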

In case the system has interval uncertainty in its parameters, the characteristic polynomial takes the form A(s) = an s^n + an−1 s^(n−1) + . . . + a0, where an > 0 and each coefficient varies within a given interval, a̲i(k) ≤ ai(k) ≤ āi(k), i = 0, . . . , n. We apply interval methods to (2), which leads to a result valid for any value of ai(k). That is why it is necessary to set such values of ai(k) in λi(k, η) from (2) at which λi(k, η) attains its maximal value. Note that it is also necessary to substitute those values of the interval coefficients which provide the minimum of the expressions fl(k, η) and g(k, η). This way, conditions (2) take the form

   ai−1(k) ai+2(k) / [(ai(k) − ai+1(k)(n − i − 1)η)(ai+1(k) − ai+2(k)(n − i − 2)η)] = λ∗,
       i = 1, . . . , n − 2;
   aj−1(k) aj+2(k) / [(aj(k) − aj+1(k)(n − j − 1)η)(aj+1(k) − aj+2(k)(n − j − 2)η)] < λ∗,
       j = 1, . . . , n − 2, j ≠ i;
   al(k) − al+1(k)(n − l − 1)η ≥ 0,   l = 1, . . . , n − 1;                 (3)
   a0(k) − a1(k)η + 2a2(k)η²/3 ≥ 0.

The coefficients ai+1(k) and aj+1(k) can take both minimal and maximal values. Therefore, for the controller synthesis, it is necessary to consider four Kharitonov polynomials and three additional polynomials from (4):

D1(s) = a0 + a1 s + a2 s² + a3 s³ + a4 s⁴ + a5 s⁵ + a6 s⁶ + . . . ,

D2(s) = a0 + a1 s + a2 s² + a3 s³ + a4 s⁴ + a5 s⁵ + a6 s⁶ + . . . ,

D3(s) = a0 + a1 s + a2 s² + a3 s³ + a4 s⁴ + a5 s⁵ + a6 s⁶ + . . .          (4)

For the verification of the condition g(k, η), it is necessary to consider the additional polynomial D4(s) = a0 + a1 s + a2 s² + a3 s³ + a4 s⁴ + a5 s⁵ + a6 s⁶ + . . .

References:

[1] B.N. Petrov, N.I. Sokolov, A.V. Lipatov, Control Systems of Plants with Variable Parameters: Engineering Methods of Analysis and Design, Mashinostroenie, Moscow, 1986 (in Russian).


Exponential enclosure techniques

for the computation of guaranteed state

enclosures in ValEncIA-IVP

Andreas Rauh1, Ekaterina Auer2, Ramona Westphal1, and Harald Aschemann1

1Chair of Mechatronics
University of Rostock
D-18059 Rostock, Germany

2Faculty of Engineering, INKO
University of Duisburg-Essen
D-47048 Duisburg, Germany

Keywords: ordinary differential equations, initial value problems, complex interval arithmetic, ValEncIA-IVP

ValEncIA-IVP is a verified solver which computes guaranteed enclosures for the solution of initial value problems (IVPs) for systems of ordinary differential equations (ODEs) [1,3]. Originally, this solver has been implemented on the basis of a simple iteration scheme that allows us to determine guaranteed state enclosures for IVPs with continuously differentiable right-hand sides. These state enclosures are given by a numerically computed approximate solution (for example, by means of a classic explicit Euler or Runge-Kutta method) with additive guaranteed error bounds. In [3], this solution procedure was extended by an exponential enclosure approach, allowing us to compute tighter state enclosures for asymptotically stable processes.

To efficiently exploit the exponential enclosure approach, the state equations are first decoupled as far as possible. For that purpose, linear dynamic systems are transformed into their real Jordan normal form. After that, the IVP is solved for the equivalent problem. Finally, guaranteed state enclosures in the original coordinates are determined by a suitable verified backward transformation.

However, this decoupling procedure does not manage to eliminate the wrapping effect in cases in which the (locally) linearized system model has an oscillatory behavior. This results from the fact that the transformed system matrix


of the linearized model is no longer purely diagonal but has a block diagonal structure. Geometrically, each block corresponds to a rotation (and scaling) of state enclosures between two subsequent time steps.

To eliminate the wrapping effect that originates from this rotation, the above-given real-valued problem with a block diagonal system matrix can be replaced by a transformation into a complex-valued diagonal form if the linear system model does not have multiple eigenvalues. In this contribution, a solution procedure for the computation of state enclosures is presented which operates on complex-valued IVPs in the corresponding normal form. This allows us to determine contracting state enclosures for linear ODE systems with asymptotically stable, conjugate complex eigenvalues of multiplicity one by means of a complex-valued exponential enclosure approach with a suitable backward transformation onto the original problem.
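
The benefit of an exponential enclosure over an iteration-based one can be seen already in the scalar case. The toy sketch below (our own naming, outward rounding omitted, not the ValEncIA-IVP implementation) contrasts a naive interval Euler scheme, whose enclosure widens at every step, with the exponential form x(t) ∈ [x0]·e^{[λ]t}, which contracts for asymptotically stable λ.

```python
import math

def euler_enclosure(x, lam, h, steps):
    """Naive interval Euler: [x] ← [x] + h·[λ]·[x]; width grows each step
    because the two occurrences of [x] are treated as independent."""
    lo, hi = x
    for _ in range(steps):
        prods = [h * l * v for l in lam for v in (lo, hi)]
        lo, hi = lo + min(prods), hi + max(prods)
    return lo, hi

def exp_enclosure(x, lam, t):
    """Exponential enclosure x(t) ∈ [x0]·e^{[λ]t}; endpoint evaluation
    suffices here because x > 0 and the map is monotone in both arguments."""
    cands = [v * math.exp(l * t) for l in lam for v in x]
    return min(cands), max(cands)

x0, lam = (0.9, 1.1), (-2.1, -1.9)     # stable test equation x' = λx
t, h = 1.0, 0.01
print(euler_enclosure(x0, lam, h, int(t / h)))   # blown-up enclosure
print(exp_enclosure(x0, lam, t))                 # tight, contracting
```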

The theory is demonstrated using selected real-life applications from the field of control engineering. Moreover, examples are presented to show the benefits of applying the corresponding transformation also to linear dynamic systems with multiple eigenvalues and uncertain parameters, as well as to nonlinear processes which exhibit oscillatory dynamics. Finally, conclusions and an outlook on how to extend the corresponding techniques to solving IVPs for differential-algebraic equations in ValEncIA-IVP [4] are given.

References:

[1] E. Auer, A. Rauh, E.P. Hofer, and W. Luther, Validated modeling of mechanical systems with SmartMOBILE: improvement of performance by ValEncIA-IVP, Lecture Notes in Computer Science, Vol. 5045, Springer, 2008, pp. 1–27.

[2] A. Rauh and H. Aschemann, Structural analysis for the design of reliable controllers and state estimators for uncertain dynamical systems, In: M. Gunther, A. Bartel, S. Schops, M. Striebel, and M. Brunk (Eds.), Progress in Industrial Mathematics at ECMI 2010, Springer, Mathematics in Industry, 17 (2012), pp. 263–269.

[3] A. Rauh and E. Auer, Verified simulation of ODEs and DAEs in ValEncIA-IVP, Reliable Computing, 15 (2011), No. 4, pp. 370–381.

[4] A. Rauh, M. Brill, C. Gunther, A novel interval arithmetic approach for solving differential-algebraic equations with ValEncIA-IVP, International Journal of Applied Mathematics and Computer Science, 19 (2009), No. 3, pp. 381–397.


Interval methods for

model-predictive control and

sensitivity-based state estimation

of solid oxide fuel cell systems

Andreas Rauh, Luise Senkel, Thomas Dotschel, Julia Kersten, and Harald Aschemann

Chair of Mechatronics, University of Rostock
D-18059 Rostock, Germany

Keywords: interval arithmetic, predictive control, verified sensitivity-based state estimation, real-time implementation, experimental validation

Control-oriented mathematical models for the thermal behavior of solid oxide fuel cells (SOFCs) [1] are characterized by the fact that internal parameters can be determined only within certain intervals. This is caused by simplifications which are necessary to make mathematical system models usable for the synthesis of control strategies such that they can be evaluated in real time. Furthermore, temperature uncertainty due to limited measurement facilities in the interior of a fuel cell stack module, as well as limited knowledge about the spatial distribution of the electrochemical reaction processes, can be expressed by interval parameters in a natural way. Finally, disturbances result from the variation of electrical load demands which are a priori unknown to the controller. To determine control strategies which prevent the violation of constraints on the admissible maximum operating temperatures, it is reasonable to derive control laws directly accounting for the above-mentioned uncertainties.

The basic approaches considered for this purpose are model-predictive control as well as sensitivity-based state and parameter estimation. Both procedures are extended by using interval arithmetic to obtain a verified implementation which directly accounts for uncertain variables with a bounded range.

Model-predictive control approaches are well-known means to stabilize dynamic systems and to compute input signals online which allow for the tracking of desired state trajectories. These control procedures, which are partially implemented by means of algorithmic differentiation, are inherently robust and


can, therefore, be used to compensate unknown disturbances to some extent [2]. This holds even if the disturbances are neglected during the derivation of the predictive control strategy.

In this contribution, different verified extensions are described for the design of model-predictive control strategies. These controllers are implemented by applying interval arithmetic procedures in real time. The use of interval arithmetic allows one to design controllers which definitely prevent the violation of predefined tolerance intervals around the desired state trajectories under consideration of predefined limitations for the actuator operating range [3].

Like any other interval arithmetic procedure for the evaluation of dynamic system models, interval-based predictive control procedures suffer from overestimation due to multiple dependencies on identical interval variables as well as the wrapping effect. In the case of predictive control procedures, this overestimation may lead to control strategies which are more conservative than necessary. To detect overestimation in the interval evaluation of the predictive control procedure, physical conservation properties (derived on the basis of the first law of thermodynamics) can be exploited in an algebraic consistency test that can be evaluated in real time in parallel to the computation of the control law.
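
The dependency problem mentioned above is easy to see in a two-line example: evaluating f(x) = x − x naively over an interval treats the two occurrences of x as independent, so the computed range overestimates the true range {0}.

```python
def isub(a, b):
    """Interval subtraction: [a] − [b] = [a_lo − b_hi, a_hi − b_lo]."""
    return (a[0] - b[1], a[1] - b[0])

x = (2.0, 3.0)
print(isub(x, x))   # (-1.0, 1.0): overestimated; the true range is {0}
# An algebraic consistency test (like the energy-balance check described
# in the talk) can reveal such overestimation at run time.
```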

Finally, the implementation of the interval-based predictive control procedure is described for a SOFC test rig available at the Chair of Mechatronics at the University of Rostock. Here, non-measured state variables are reconstructed by a verified sensitivity-based observer [4]. This contribution is concluded by an outlook on future work focusing on algorithmic improvements for reliable real-time capable control as well as state and parameter estimation.

References:

[1] R. Bove and S. Ubertini (Eds.), Modeling Solid Oxide Fuel Cells, Springer, Berlin, 2008.

[2] Y. Cao and W.-H. Chen, Automatic differentiation based nonlinear model predictive control of satellites using magneto-torquers, Proc. of IEEE Conf. on Industrial Electronics and Applications, ICIEA 2009, Xi'an, China, 2009, pp. 913–918.

[3] A. Rauh, J. Kersten, E. Auer, and H. Aschemann, Sensitivity-based feedforward and feedback control for uncertain systems, Computing, 2012, No. 2–4, pp. 357–367.

[4] A. Rauh, L. Senkel, and H. Aschemann, Sensitivity-based state and parameter estimation for fuel cell systems, Proc. of 7th IFAC Symposium on Robust Control Design, Aalborg, Denmark, 2012.


On computer-aided proof

of the correctness of non-polynomial

oscillator realization

of the generalized Verma module

for non-linear superalgebras

Alexander Reshetnyak1, Andrei Kuleshov2

and Vladimir Starichkov3

1Institute of Strength Physics and Materials Science SB RAS
2/4, Akademicheskii ave., 634021 Tomsk, Russia

2Elecard Company, 3 Razvitiya ave., 634021 Tomsk, Russia

3Institute of Cryptography, Communication and Computer Sciences
70, Michurinskii ave., 119602 Moscow, Russia

Keywords: symbolic computations, Verma module, non-linear commutator algebra, C#, C++ implementation

We consider the problem of computer verification of the correctness of the oscillator realization, over the Heisenberg superalgebra A1,2, of a special nonlinear commutator superalgebra A(Y(1), AdSd) with 3 odd (t0, t1, t1+) and 6 even (l0, g0, l1, l1+, l2, l2+) generators, within a symbolic computational approach by means of the new program NcNlSuperalgebra written in C# [1] (having the Russian Certificate of State Registration No. 2010611602). The above superalgebra naturally arises within the procedure of constructing the Lagrangian formulation for the higher-spin spin-tensors living on the anti-de-Sitter (AdS) d-dimensional space-time, characterized by a non-vanishing inverse square AdS-radius r. The oscillator realization was based, firstly, on the explicit construction of the generalized Verma module (on general concepts of Verma modules see [2]) for the superalgebra A(Y(1), AdSd) with involution. The feature of such a procedure is that the elements of the Verma module |n1^0, n1, n2⟩V, for n1^0 = 0, 1; n1, n2 ∈ N0, are constructed with the help of a triangular-like decomposition of A(Y(1), AdSd) and the highest weight vector |0⟩V ≡ |0, 0, 0⟩V; in contrast to the Lie algebra case, the action of the Cartan-like and positive root vectors t0, l0, t1, l1 on |n1^0, n1, n2⟩V contains a larger number of elements in the corresponding linear combination when the values


of the components n1, n2 become large. Second, there exists a one-to-one correspondence between the constructed generalized Verma module and a special Fock space generated by the same number of Heisenberg superalgebra A1,2 generating elements f, f+, bi, bi+, i = 1, 2, as the number of Hermitian elements in A(Y(1), AdSd). However, the realizations of the elements t0, l0, t1, l1 in terms of f, f+, bi, bi+ are non-polynomial in comparison with the Lie algebra case (for r = 0).

In order to check the validity of the oscillator realization of A(Y(1), AdSd)

over A1,2, i.e., that the found expressions for (t0, t1, t1+, l0, g0, li, li+) really satisfy the given algebraic relations of the non-linear superalgebra, we have elaborated the program NcNlSuperalgebra, which permits solving this problem within the restricted induction principle in powers of the parameter r. We have checked the correctness of the oscillator realization up to the sixth power in r. NcNlSuperalgebra has some advantages and deficiencies in comparison with Plural, known as a non-commutative extension of the package Singular [3]. We plan both to translate the computer program to C++ to enhance its processing speed and to enlarge its possibilities for application to the more complicated non-linear algebras considered in [4] and [5].

References:

[1] A. Kuleshov, A.A. Reshetnyak, Programming Realization of Symbolic Computations for Non-linear Commutator Superalgebras over the Heisenberg–Weyl Superalgebra: Data Structures and Processing Methods, arXiv:0905.2705 [hep-th] (2009).

[2] J. Dixmier, Algebres Enveloppantes, Gauthier-Villars, Paris, 1974.

[3] V. Levandovskyy, Plural, a non-commutative extension of Singular: past, present and future, in Proceedings of the Int. Symposium on Mathematical Theory of Networks and Systems (MTNS'06) (A. Iglesias, N. Takayama, eds), 2006.

[4] I.L. Buchbinder, A. Reshetnyak, General Lagrangian formulation for higher spin fields with arbitrary index symmetry. I. Bosonic fields, Nuclear Physics B, 862 (2012), pp. 270–326.

[5] C. Burdik, A. Reshetnyak, On representations of Higher Spin symmetry algebras for mixed-symmetry HS fields on AdS-spaces. Lagrangian formulation, J. Phys.: Conf. Ser., 343 (2012), p. 012102.


Interval arithmetic

over finitely many endpoints

Siegfried M. Rump

Institute for Reliable Computing,
Hamburg University of Technology,
Schwarzenbergstraße 95, 21071 Hamburg, Germany

and Visiting Professor at Waseda University,
Faculty of Science and Engineering,
3–4–1 Okubo, Shinjuku-ku, Tokyo 169–8555, Japan

Keywords: interval arithmetic, enclosure methods, verified bounds

To my knowledge, all definitions of interval arithmetic start with real endpoints and prove properties. Then, for practical use, the definition is specialized to finitely many endpoints, where many of the mathematical properties are no longer valid. There seems to be no treatment of how to choose this finite set of endpoints to preserve as many mathematical properties as possible.

Here we define interval endpoints directly using a finite set which, for example, may be based on the IEEE 754 floating-point standard. The corresponding interval operations emerge naturally from the corresponding power set operations. We present necessary and sufficient conditions on this finite set to ensure desirable mathematical properties, many of which are not satisfied by other definitions. For example, an interval product contains zero if and only if one of the factors does.
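
The zero-containment property is easy to violate with conventional IEEE 754 endpoints and outward rounding, which is one motivation for placing conditions on the endpoint set. A toy illustration (our own code, not the construction of the talk; requires Python 3.9+ for math.nextafter): the true product of two intervals that both exclude zero underflows, so the computed enclosure contains zero.

```python
import math

def round_down(x):
    return math.nextafter(x, -math.inf)

def round_up(x):
    return math.nextafter(x, math.inf)

def imul(a, b):
    """Outwardly rounded interval product for positive intervals."""
    return round_down(a[0] * b[0]), round_up(a[1] * b[1])

a = (1e-200, 2e-200)      # zero is NOT in [a]
lo, hi = imul(a, a)       # true product lies in [1e-400, 4e-400], all positive
print(lo, hi)             # lo is negative: the enclosure now contains zero
```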

The key feature of the theoretical foundation is that "endpoints" of intervals are not points but non-overlapping closed, half-open or open intervals, each of which can be regarded as an atomic object. By using non-closed intervals among its "endpoints", intervals containing "arbitrarily large" and "arbitrarily close to but not equal to" a real number can be handled. The latter may be zero, defining "tiny" numbers, but also any other quantity including transcendental numbers.

Our scheme can be implemented straightforwardly using the IEEE 754 floating-point standard.


The bijective coding

in the constructive world of R^n_c

Gennady G. Ryabov, Vladimir A. Serov

Research Computer Center of Moscow State University (MSU SRCC)
1, building 4, Leninskiye Gory, 119991 Moscow, Russia

Keywords: n-cube, quaternary coding, Hausdorff-Hamming metric, simplicial partition, combinatorial filling, supercomputing

The development of bijective coding methods for the constructive world [1] of cubic structures in a standard lattice R^n_c (with a given orthonormal frame B = {0, e1, . . . , en} in R^n), consisting of n-cubes adjoining each other by (n − 1)-hyperfaces [2], is considered. Such coding provides a one-to-one correspondence between an n-digit ternary word D (di ∈ A = {0, 1, 2}) and each k-face (k = 0, . . . , n) in an n-cube. Since it is possible to represent each individual k-face as a Cartesian product (Π) of k unit intervals I(ei) such that ei ∈ B1 ⊂ B, and a translation (T) along the remaining (n − k) vectors ej ∈ B2 ⊂ B (B2 = B \ B1), one may express the bijectivity property for the k-face fnk as

   fnk(B1, B2) = Π_k I(ei) + T_(n−k)(ej)  ←[1:1]→  <d1, . . . , dn>,   di = 2 for ei ∈ B1,   dj ∈ {0, 1} for ej ∈ B2.

The sets of all n-digit ternary words A*_n = {<d1, . . . , dn>} are called cubants [3]. Let us supplement the alphabet A by the letter ∅ (empty set) and define a digit-wise operation of "multiplication" for all words on A′*_n, A′ = {∅, 0, 1, 2}:

   0×0 = 0;  0×1 = 1×0 = ∅;  0×2 = 2×0 = 0;  1×1 = 1;  1×2 = 2×1 = 1;  2×2 = 2;  ∅×(0, 1, 2) = (0, 1, 2)×∅ = ∅.

Many operations on cubants and their properties were defined in [3], including:

1. The number of letters ∅ in the product of cubants D1 and D2 is equal to the minimal path length across edges between the bijective faces:

   #(∅)(D1 × D2) = Lmin(D1; D2).                                            (1)

2. Let D∗1/D2 be the cubant for the part of face D1 furthest from face D2. Then the algorithm for computing D∗1/D2 consists in analyzing all pairs of digits d1i ∈ D1, d2i ∈ D2 and changing the letters in D1 in accordance with the rules: for the case (d1i = 2; d2i = 0) set d∗1i = 1, and for the case (d1i = 2; d2i = 1) set d∗1i = 0; in the remaining cases there are no changes in D1. Thus, on the basis of (1):

   max_(D1→D2) Lmin(D∗1/D2; D2) = #(∅)((D∗1/D2) × D2),                      (2)


   max_(D2→D1) Lmin(D∗2/D1; D1) = #(∅)((D∗2/D1) × D1).                      (3)

With (2) and (3), we have a distance ρHH(D1, D2) = max{#(∅)((D∗1/D2) × D2); #(∅)((D∗2/D1) × D1)}. All the k-faces of an n-cube form a finite Hausdorff-Hamming metric space.

The simplicial partition of an n-cube is such that each simplex is based on a successive circuit over n + 1 vertices, beginning at (00 . . . 0) and ending at (11 . . . 1), with a Hamming distance of 1 required for each successive pair of vertices. Each step in the circuit is parallel to some ei. The total number of all such different circuits in an n-cube is equal to n!. The vertex set V and the edge set E are calculated for the circuit order (ei1, . . . , ein) as follows: V = {v0 = (00 . . . 0); vi = vi−1 + eis; s = 1, . . . , n}; E = {v0v1; v0v2; v0v3; v1v3; v2v3; . . . ; vn−1vn}. V and E form a 1-skeleton of the simplex. The circuit order for a canonical partition of an individual k-face is given on the set B1 = {eis : dis ∈ D; dis = 2} = {ej1, . . . , ejk} by a substitution P ∈ Sk (the symmetric group): P(ej1, . . . , ejk) = (em1, . . . , emk). The following operations are realized analogously to the case of an n-cube. We denote the action of the group Sk on D with respect to the calculation of V and E as Θ, and the simplex with 1-skeleton (V, E) as ∆. Then Θ(D, P) = (V, E) ←[1:1]→ ∆0(D, P); ∆(D, P) = ∆0(D, P) + T(eit), eit ∈ B2, t = 1, . . . , k. The pair cubant-substitution (<d1, . . . , dn / m1, . . . , mk>) can be called a simpant. The common alphabet consists of all the decimal figures and some tokens. Hence, each k-face consists of k! simplices, bijective to k! simpants, and their total number in an n-cube is F∆(In) = Σ_(k=2..n) k! C(n, k) 2^(n−k), with lim_(n→∞) F∆(In)/n! = e². A notion of combinatorial filling for cubic and simplicial structures in R^n_c is proposed. Finally, we discuss the possibility of using modern supercomputers for computing on sets with a given combinatorial filling.

References:

[1] Y.I. Manin, Classical computing, quantum computing, and Shor's factoring algorithm, http://arxiv.org/abs/quant-ph/9903008v1 (March 1999).

[2] N.P. Dolbilin, M.A. Shtan'ko, M.I. Shtogrin, Cubic manifolds in lattices, Russian Acad. Sci. Izv. Math., 44 (1995), No. 2, pp. 301–313.

[3] G.G. Ryabov, V.A. Serov, On the metric-topological computing in the constructive world of cubic structures, Numerical Methods and Programming, 11 (2010), No. 2, pp. 326–335.


Estimation of model parameters

Ilshat R. Salakhov, Olga G. Kantor

Bashkir State University
32, Zaki Validi Street
450074 Ufa, Russia

Keywords: differential equations, Runge-Kutta method, system dynamics models, estimation of model parameters

Mathematical modeling of economic processes and consecutive establishment of logical connections enable monitoring, control and management. It is the most effective tool for solving various problems: problems of optimization, decision-making, and many others.

System dynamics is one of the methods for studying complex problems with nonlinear feedback; model (1) below is constructed on its basis. The method was developed in the mid-twentieth century by J. Forrester, a professor at the Massachusetts Institute of Technology. The aim of our work is to solve the inverse problem of determining the control parameters of a system dynamics model.

This problem is solved for a model which describes changes in the population, taking into account the influence of many factors. Using complex numerical algorithms, the model was corrected to achieve the required accuracy of description [1, 2]:

   dN/dt = 8.139 · 10^(−22) · N^0.05 · S² − 64.1 · N^0.03 · S^0.3,

   dD/dt = 560 · D^0.35 − 9900 · I,                                         (1)

   dI/dt = 0.131 · I^(−0.4) − 0.0072 · S^0.092,

where N is the population of the Russian Federation (persons), D is the per capita income for the year (rub./person), I is the consumer price index, and S = N · D / I.

In forecasting population change, we pose the following problem: what should the system control parameters D and I be to provide the necessary population in the coming year, while maintaining an adequate description of the source data? Next we formulate the optimality principles:


|N(t) − Nexp(t)| ≤ δ1;  |D(t) − Dexp(t)| ≤ δ2;  |I(t) − Iexp(t)| ≤ δ3,

i.e., all system parameters lie within a given corridor of values relative to the experimental data;

AN ≤ 10%;  AD ≤ 10%;  AI ≤ 10%,

i.e., for all three equations the average approximation error is less than 10%;

|N(t) − Nexp(t + ∆)| ≤ ε N(t),

i.e., the predicted value of the population N deviates by no more than ε = 0.001 of the actual value in the last period of time.

To organize the computer simulation, a software package of mathematical modeling methods and numerical algorithms was implemented, including:

1. Solution of the direct problem for the differential equations by numerical integration using the Runge-Kutta method.

2. Choice of initial approximations of the model parameters via translation of the system of differential equations into integral equations.

3. Determination of the ranges of coefficient variation within which the data are adequately described.

4. Search for the model parameters by analyzing the optimality criteria.

To optimize the planned experiment, it is necessary to identify the ranges of coefficient variation within which the data are adequately described. We obtained that the coefficients of the first equation vary in the ranges [5; 9] and [58; 62.5], and the coefficients of the second equation vary in the ranges [325; 820] and [0; 19.200].
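As a minimal illustration of the Runge-Kutta integration in step 1 (our own sketch, not the authors' software), the right-hand sides below restate system (1) with S = N · D / I; the initial values are purely hypothetical and are not taken from the paper.

```python
def rhs(N, D, I):
    """Right-hand sides of system (1); S = N*D/I."""
    S = N * D / I
    dN = 8.139e-22 * N**0.05 * S**2 - 64.1 * N**0.03 * S**0.3
    dD = 560.0 * D**0.35 - 9900.0 * I
    dI = 0.131 * I**-0.4 - 0.0072 * S**0.092
    return dN, dD, dI

def rk4_step(y, h):
    """One classical 4th-order Runge-Kutta step for the state y = (N, D, I)."""
    def shifted(state, k, c):
        return tuple(yi + c * ki for yi, ki in zip(state, k))
    k1 = rhs(*y)
    k2 = rhs(*shifted(y, k1, h / 2))
    k3 = rhs(*shifted(y, k2, h / 2))
    k4 = rhs(*shifted(y, k3, h))
    return tuple(yi + h / 6 * (a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

# Hypothetical initial state (not from the paper):
# N in persons, D in rub./person, I dimensionless.
y = (1.43e8, 2.4e5, 1.1)
for _ in range(10):          # ten steps of size h = 0.01 (fractions of a year)
    y = rk4_step(y, 0.01)
```

Fitting the model then amounts to repeating such integrations while the coefficients are varied within their admissible ranges.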

Analysis of the experimental results showed that to provide population growth from 0 to 0.1%, it is necessary to increase per capita income by 1.4% to 27%, or to increase the consumer price index by 5.4% to 7.3%.

References:

[1] S.I. Spivak, O.G. Kantor, I.R. Salakhov, Estimation of model parameters of system dynamics, Journal Srednevolzhskaya Mathematical Society, 13 (2011), No. 3, pp. 107–113.

[2] S.I. Spivak, O.G. Kantor, I.R. Salakhov, Modeling the Russian Federation population by the system dynamic method, in Statistics. Modeling. Optimization, Publishing Center of South Ural State University, Chelyabinsk, 2011, pp. 239–246.


Interval pseudo-inverse matrices:

computation and applications

Pavel Saraev

Lipetsk State Technical University, 30 Moskovskaya St., 398600 Lipetsk, Russia

[emailprotected]

Keywords: interval pseudo-inversion, interval matrices, optimization

For any square interval matrix [A] ∈ IR^{n×n}, the interval inverse matrix is the minimal interval matrix [A]^−1 ∈ IR^{n×n} such that [A]^−1 ⊇ { A^−1 : A ∈ [A] } [4]. It can be computed using an algorithm based on an interval method for real inverse matrix computation [2]. Generalizing techniques elaborated for interval inverse matrices to singular square and rectangular matrices is of scientific and practical interest.

The pseudo-inverse matrix A^+ ∈ R^{n×m} for A ∈ R^{m×n}, also known as the Moore-Penrose generalized inverse, is the only matrix satisfying the following conditions [1]: AA^+A = A, A^+AA^+ = A^+, (AA^+)^T = AA^+, (A^+A)^T = A^+A. For any interval matrix [A] ∈ IR^{m×n}, we define the interval pseudo-inverse matrix [A]^+ ∈ IR^{n×m} as the minimal interval matrix such that [A]^+ ⊇ { A^+ : A ∈ [A] }. So [A]^+ includes the real pseudo-inverse matrices A^+ for all A ∈ [A]. For most applications, we need an enclosure of [A]^+ rather than the exact interval pseudo-inverse matrix. This work presents the interval Greville algorithm for enclosing the pseudo-inverse of an interval matrix. Let [A] ∈ IR^{m×n}, and let [a_k] be its k-th column, where k = 1, ..., n. Let [A_k] be the submatrix of [A] constructed from the first k columns of [A]: [A_k] = ( [a_1] [a_2] ... [a_k] ). If k = 1, then [A_1] = [a_1]. For k = 2, ..., n, it is clear that [A_k] = ( [A_{k−1}] [a_k] ).

Let k = 1. Assume [d_1] = ‖[a_1]‖^2 = Σ_{i=1}^{m} [a_{i1}]^2. Then

[A_1]^+ = 0, if [d_1] = 0,
[A_1]^+ = [a_1]^T / [d_1], if [d_1] > 0,
[A_1]^+ = 0 ∪ [a_1]^T / [d_1], otherwise,

where 0 ∈ IR^m is the null interval vector, and ∪ is the interval hull of the union of interval vectors. Let k = 2, ..., n. Then

[A_k]^+ = ( [A_{k−1}]^+ (I − [a_k][f_k]) ; [f_k] ),

where the row [f_k] is appended below the block [A_{k−1}]^+(I − [a_k][f_k]), I is the identity matrix of order m, and

[c_k] = (I − [A_{k−1}][A_{k−1}]^+)[a_k],   [d_k] = ‖[c_k]‖^2,

[f_k] = [c_k]^T / [d_k], if [d_k] > 0,
[f_k] = [a_k]^T ([A_{k−1}]^+)^T [A_{k−1}]^+ / (1 + ‖[A_{k−1}]^+ [a_k]‖^2), if [d_k] = 0,
[f_k] = [c_k]^T / [d_k] ∪ [a_k]^T ([A_{k−1}]^+)^T [A_{k−1}]^+ / (1 + ‖[A_{k−1}]^+ [a_k]‖^2), otherwise.

Hence, [A_n]^+ is the required enclosure for [A]^+. The result can have infinite bounds in some cases, and the probability of such situations increases for wide and large matrices. An accuracy criterion can be based on tracing the defect in the Moore-Penrose conditions, which is defined as

t = ‖[A][A]^+[A] − [A]‖ + ‖[A]^+[A][A]^+ − [A]^+‖ + ‖([A][A]^+)^T − [A][A]^+‖ + ‖([A]^+[A])^T − [A]^+[A]‖.

An interval pseudo-inverse matrix can be applied in optimization problems for determining the bounds of decision sets; it can also be used for detecting unstable pseudo-inversion of real matrices. This can be done by computing an interval pseudo-inverse of an ε-inflation [A_ε] of a given real matrix A. When [A_ε]^+ is wide or the defect is large, the pseudo-inversion is unstable.

Another interesting application is guaranteed global parameter estimation in nonlinear least squares problems whose variables are separable. It is based on the relation u = Ψ(v)^+ y between the linear and nonlinear parameter vectors u and v, respectively, with known response vector y, where Ψ(v) is the matrix of basis functions built on the input-output data set [3]. For a subspace of nonlinear parameters [v], an optimal subspace of linear parameters can be estimated as [u] = [Ψ([v])]^+ y.

The work is supported by RFBR, project N 11-07-97504-r center a.
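For reference, the underlying non-interval Greville recursion can be sketched as follows. This is a plain floating-point version for a point matrix (our own sketch, not the interval enclosure algorithm of the talk), and it ignores rounding errors; the tolerance `tol` for detecting a zero column is an assumption of the sketch.

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def pinv(A, tol=1e-12):
    """Moore-Penrose pseudo-inverse of an m x n matrix (list of lists)
    via the Greville column-by-column recursion."""
    m, n = len(A), len(A[0])
    def col(k):                                  # k-th column as an m x 1 matrix
        return [[A[i][k]] for i in range(m)]
    a = col(0)
    d = sum(x[0] ** 2 for x in a)
    Ap = [[x[0] / d for x in a]] if d > tol else [[0.0] * m]   # A_1^+, 1 x m
    Ak = [row[:1] for row in A]                                # A_1,   m x 1
    for k in range(1, n):
        ak = col(k)
        # c_k = (I - A_{k-1} A_{k-1}^+) a_k
        proj = matmul(Ak, matmul(Ap, ak))
        ck = [[ak[i][0] - proj[i][0]] for i in range(m)]
        dk = sum(x[0] ** 2 for x in ck)
        if dk > tol:                             # regular case
            fk = [[x[0] / dk for x in ck]]
        else:                                    # a_k linearly dependent on A_{k-1}
            w = matmul(Ap, ak)
            denom = 1.0 + sum(x[0] ** 2 for x in w)
            akT = [[ak[i][0] for i in range(m)]]
            fk = matmul(matmul(akT, transpose(Ap)), Ap)
            fk = [[x / denom for x in fk[0]]]
        # A_k^+ = ( A_{k-1}^+ (I - a_k f_k) ; f_k ), f_k appended as last row
        outer = matmul(ak, fk)                   # m x m
        ImO = [[(1.0 if i == j else 0.0) - outer[i][j] for j in range(m)]
               for i in range(m)]
        Ap = matmul(Ap, ImO) + fk
        Ak = [Ak[i] + [ak[i][0]] for i in range(m)]
    return Ap
```

For an invertible matrix, the recursion reproduces the ordinary inverse, e.g. `pinv([[1.0, 2.0], [3.0, 4.0]])` gives `[[-2.0, 1.0], [1.5, -0.5]]` up to rounding; the interval version replaces every operation above by its interval counterpart and treats the three cases for [f_k] as in the formulas above.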

References:

[1] A. Albert, Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York and London, 1972.

[2] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.

[3] G.H. Golub, V. Pereyra, The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate, SIAM J. Numer. Anal., 10 (1973), pp. 413–432.

[4] J. Rohn, Inverse interval matrix, SIAM J. Numer. Anal., 30 (1993), No. 3, pp. 864–870.


Calculation of potential and attraction

force of an ellipsoid

Alexander O. Savchenko

Institute of Computational Mathematics and Mathematical Geophysics SB RAS

6 Lavrentiev Ave., 630090 Novosibirsk, Russia
[emailprotected]

This work is devoted to numerical calculation of the potential and attraction force of an ellipsoid. The problem is reduced to that of calculating the integral of a given density distribution with a singular kernel. An easy-to-implement semi-analytical method for calculating the integral is proposed.

The main idea of the method is to represent the sought-for function in the form of a triple integral in such a way that the inner integral of the kernel can be taken analytically. In doing this, the kernel is considered as a weight function. To approximate the inner integral, a quadrature formula for the product of functions, one of which has an integrable singularity, is proposed. This approach enables one to obtain an integrand with a weak logarithmic singularity. This singularity can be easily eliminated by a change of variables in the next outer integral. Thus, to calculate all the integrals, quadrature formulas without singularities are obtained. Additionally, the functions to be calculated do not have large values within the integration domain. To obtain higher accuracy of the numerical calculations, it is sufficient to simply increase the number of integration points along each of the coordinates. This approach is not always acceptable in many other integration methods because of the presence of a singularity in the integrands.

The method is illustrated by numerical experiments for which complicated test functions are constructed. These functions, which are the exact potential and exact attraction force of an ellipsoid of revolution with an elliptical distribution of density, are of independent value and can be used for other purposes.

References:

[1] A.O. Savchenko, Calculation of the volume potential for ellipsoidal bodies, Sibirskii Zhurnal Industrial'noi Matematiki, 15 (2012), No. 1, pp. 123–131.


A numerical verification method for

solutions to systems of elliptic partial

differential equations

Kouta Sekine1, Akitoshi Takayasu2 and Shin’ichi Oishi2,3

1Graduate School of Fundamental Science and Engineering, Waseda University
2Faculty of Science and Engineering, Waseda University

3CREST, JST

3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

[emailprotected]

Keywords: computer-assisted proof, finite element method, systems of elliptic partial differential equations

In this talk, a method of computer-assisted proof is proposed for systems of elliptic partial differential equations:

−ε^2 ∆u = f(u) − δv,  in Ω,
−∆v = u − γv,  in Ω,    (1)
u = v = 0,  on ∂Ω.

Here, Ω is a bounded polygonal domain in R^2; ε ≠ 0, γ and δ are real parameters. A mapping f : H^1_0(Ω) → L^2(Ω) is assumed to be Fréchet differentiable. When u is a known function, the boundary value problem

−∆v = u − γv,  in Ω,
v = 0,  on ∂Ω,    (2)

has a unique solution. Then v is represented by v = Bu, where B : L^2(Ω) → H^1_0(Ω) is the solution operator of (2). Substituting this into (1), it follows that

−∆u = (1/ε^2) (f(u) − δBu),  in Ω,
u = 0,  on ∂Ω.    (3)

Transforming (1) into (2) and (3) allows the verification of solutions. Y. Watanabe has studied this type of system (1) by Nakao's theory, which is based on fixed-point theorems [1]. Using the Newton-Kantorovich theorem with the operator norm ‖B‖_{L^2,H^1_0}, a verification method for (2) and (3) is proposed. If γ is not an eigenvalue λ of the Laplace operator, the solution operator B exists. The operator norm ‖B‖_{L^2,H^1_0} can be estimated as follows:

‖B‖_{L^2,H^1_0} ≤ C_{e,2} K,    (4)

where C_{e,2} is the Poincaré constant and

K := max { |λ / (λ + γ)| : λ is an eigenvalue of the Laplace operator },  γ ∈ R \ {λ}.

Hence, an upper bound for the operator norm ‖B‖_{L^2,H^1_0} is obtained simply from the eigenvalues λ. Our verification method is based on the following two studies. A verified evaluation of eigenvalues of the Laplace operator has been given by X. Liu and S. Oishi [2]. A. Takayasu, X. Liu and S. Oishi have proposed a verification method for solutions to nonlinear partial differential equations using the Newton-Kantorovich theorem [3]. In our procedure, approximate solutions û and v̂ of the system (1) are calculated by the finite element method. The inequality (4) yields a rigorous upper bound of the norm ‖B‖_{L^2,H^1_0}, which leads to the guaranteed error estimate of ‖u − û‖_{H^1_0} based on the Newton-Kantorovich theorem. Further, an upper bound of ‖v − v̂‖_{H^1_0} is given by the operator norm ‖B‖_{L^2,H^1_0} and ‖u − û‖_{H^1_0}. Detailed proofs and numerical results will be presented.

References:

[1] Y. Watanabe, A numerical verification method for two-coupled elliptic partial differential equations, Japan Journal of Industrial and Applied Mathematics, 26 (2009), pp. 233–247.

[2] X. Liu and S. Oishi, Verified eigenvalue evaluation for elliptic operator on arbitrary polygonal domain, in preparation.

[3] A. Takayasu, X. Liu and S. Oishi, Verified computations to semilinear elliptic boundary value problems on arbitrary polygonal domains, submitted for publication.


Processing measurement uncertainty:

from intervals and p-boxes to

probabilistic nested intervals

Konstantin K. Semenov1, Gennady N. Solopchenko1, and Vladik Kreinovich2

1Measur. Info. Techn. Dept., Saint-Petersburg State Polytechnic Univ., Russia
2Computer Science Dept., University of Texas, El Paso, Texas 79968, USA

[emailprotected], [emailprotected], [emailprotected]

Keywords: measurement uncertainty, interval computations, p-boxes, probabilistic nested intervals

Due to measurement errors, the result ỹ = f(x̃_1, ..., x̃_n) of processing measurement outcomes is, in general, different from the desired result y = f(x_1, ..., x_n) of processing the actual (unknown) values x_i. It is desirable to estimate the difference ∆y := ỹ − y [4].

When we only know the bounds ∆_i on the measurement errors ∆x_i := x̃_i − x_i, the only information that we have about y is that y ∈ [y] := { f(x_1, ..., x_n) : x_1 ∈ [x_1], ..., x_n ∈ [x_n] }, where [x_i] = [x̃_i − ∆_i, x̃_i + ∆_i]. Computing such a range [y] is one of the main problems solved by interval computations [2].
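As a minimal illustration of this interval-computations step (our own sketch, not part of the talk), the natural interval extension of an expression yields a guaranteed enclosure of the range of y; the function f and the measured values below are hypothetical.

```python
class Interval:
    """Closed interval [lo, hi]; arithmetic without outward rounding,
    so enclosures are exact only up to floating-point error."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

def measured(x_tilde, delta):
    """The interval [x~ - Delta, x~ + Delta] containing the actual value."""
    return Interval(x_tilde - delta, x_tilde + delta)

# Hypothetical example: y = f(x1, x2) = x1*x2 - x1
x1 = measured(2.0, 0.1)      # x1 in [1.9, 2.1]
x2 = measured(3.0, 0.2)      # x2 in [2.8, 3.2]
y = x1 * x2 - x1             # natural interval extension of f
# [y.lo, y.hi] = [3.22, 4.82] encloses the exact range [3.42, 4.62];
# the overestimation comes from the two occurrences of x1 (dependency effect).
```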

Often, in addition to the bounds ∆_i, we have partial information about the probability of different values ∆x_i. A general probability distribution can be described by the cumulative distribution function (cdf) F(x) := Prob(η ≤ x). Partial information means that instead of knowing the exact values F(x), we only know bounds F̲(x) ≤ F(x) ≤ F̄(x). The corresponding "interval-valued" cdf [F̲(x), F̄(x)] is known as a probability box, or p-box, for short [1].

P-boxes are useful in decision making, where the objective is often to satisfy a given inequality-type constraint, and p-boxes provide the probability of satisfying this constraint. In many practical situations (e.g., in control applications), the objective is to find how far the actual value y is from our estimate ỹ. We know that the desired probability p := Prob(−∆ ≤ η ≤ ∆) is equal to F(∆) − F(−∆), so, based on the known p-boxes, we can conclude that p ≤ p̄ := F̄(∆) − F̲(−∆). However, this p̄ is often an overestimation: e.g., for ∆ = 0, we have p = 0, while for p-boxes of finite width w, we have p̄ = 2w.

To get better bounds for p, we use probabilistic nested intervals: 1-parametric families of confidence intervals [x_i(α)] for which Prob(x_i ∈ [x_i(α)]) ≥ 1 − α and [x_i(α)] ⊆ [x_i(α′)] when α′ < α. E.g., when we have a systematic error component with known bounds [−∆_si, ∆_si] and a normally distributed random error component with a known σ_i, the confidence intervals are obtained by adding the usual Gaussian confidence interval to the interval [x̃_i − ∆_si, x̃_i + ∆_si].

Probabilistic nested intervals are a particular case of nested intervals [3]. However, [3] focused on expert estimates, where it was reasonable to assume that when we know that x_i ∈ [x_i(α)] with confidence 1 − α, then y = f(x_1, ..., x_n) ∈ f([x_1(α)], ..., [x_n(α)]) with the same confidence 1 − α. This assumption led to explicit formulas for propagating expert-related nested intervals through computations.

In contrast, it is usually assumed that random errors of different measurements are independent [4]; in this case, when for each i we have x_i ∈ [x_i(α)] with probability ≥ 1 − α, we can only conclude that (x_1, ..., x_n) ∈ [x_1(α)] × ... × [x_n(α)] (and thus, that y = f(x_1, ..., x_n) ∈ f([x_1(α)], ..., [x_n(α)])) with probability ≥ (1 − α)^n, which is much smaller than 1 − α. So, we need new formulas for propagating probabilistic nested intervals. Such formulas will be described in the talk.

When measurement errors ∆x_i are small, we can safely ignore terms quadratic (and of higher order) in ∆x_i. For this linearized case, we can use automatic differentiation to design efficient algorithms. We can further speed up computations because, in practice, inputs are usually known with 5–10% accuracy. In such situations, the result can only be computed with a similar 1-digit accuracy, so there is no need to perform iterations that improve the 2nd digit. A practical example of such a speed-up will be presented.

References:

[1] S. Ferson, RAMAS Risk Calc 4.0: Risk Assessment with Uncertain Numbers, CRC Press, Boca Raton, Florida, 2002.

[2] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.

[3] H.T. Nguyen, V. Kreinovich, Nested intervals and sets: concepts, relations to fuzzy sets, and applications, in Applications of Interval Computations (R.B. Kearfott et al., eds.), Kluwer, Dordrecht, 1996, pp. 245–290.

[4] S. Rabinovich, Measurement Errors and Uncertainties: Theory and Practice, Springer, New York, 2005.


Deterministic global optimization

using the Lipschitz condition

Yaroslav D. Sergeyev

University of Calabria, Rende, Italy
and N.I. Lobatchevsky University of Nizhni Novgorod, Russia

Via P. Bucci, Cubo 42-C, 87036 Rende (CS), Italy
[emailprotected]

Keywords: global optimization, Lipschitz condition, partitioning strategies

In this lecture, the global optimization problem for a multidimensional function satisfying the Lipschitz condition with an unknown Lipschitz constant over a multi-dimensional box is considered. It is supposed that the objective function can be "black box", multiextremal, and non-differentiable. It is also assumed that evaluation of the objective function at a point is a time-consuming operation. Many algorithms for solving this problem have been discussed in the literature (see [1–12] and references given therein). They can be distinguished, for example, by the way of obtaining information about the Lipschitz constant and by the strategy used to explore the search domain.

Different exploration techniques based on various adaptive partition strategies are analyzed. The main attention is dedicated to diagonal algorithms, since they have a number of attractive theoretical properties and have proved to be efficient in solving applied problems. In these algorithms, the search box is adaptively partitioned into sub-boxes, and the objective function is evaluated only at two vertices corresponding to the main diagonal of the generated sub-boxes.

It is demonstrated that the traditional diagonal partition strategies do not fulfill the requirements of computational efficiency because they execute many redundant evaluations of the objective function. A new adaptive diagonal partition strategy that allows one to avoid such computational redundancy is described. Some powerful multidimensional global optimization algorithms based on the new strategy are introduced. Results of extensive numerical experiments performed to test the proposed methods demonstrate their advantages over traditional diagonal algorithms in terms of both the number of trials of the objective function and the qualitative analysis of the search domain, which is characterized by the number of generated boxes. Finally, problems with Lipschitz first derivatives are considered, and connections between Lipschitz global optimization and interval analysis global optimization are discussed.
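To illustrate the basic geometric bounding idea in its simplest univariate form (our own sketch with a known Lipschitz constant L; the talk's diagonal algorithms generalize this to multidimensional boxes with an unknown constant), a branch-and-bound scheme can use the fact that on [lo, hi] a Lipschitz function cannot drop below (f(lo) + f(hi))/2 − L(hi − lo)/2:

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-4):
    """Branch-and-bound global minimization of f over [a, b], assuming
    |f(x) - f(y)| <= L|x - y|.  Returns a value within tol of the minimum."""
    flo, fhi = f(a), f(b)
    best = min(flo, fhi)                      # best value found so far
    heap = [((flo + fhi) / 2 - L * (b - a) / 2, a, b, flo, fhi)]
    while heap:
        lower, lo, hi, flo, fhi = heapq.heappop(heap)
        if best - lower < tol:                # global lower bound is close enough
            break
        mid = (lo + hi) / 2
        fmid = f(mid)
        best = min(best, fmid)
        for l, h, fl, fh in ((lo, mid, flo, fmid), (mid, hi, fmid, fhi)):
            bound = (fl + fh) / 2 - L * (h - l) / 2
            if bound < best - tol:            # subinterval may still improve best
                heapq.heappush(heap, (bound, l, h, fl, fh))
    return best
```

For example, `lipschitz_minimize(lambda x: (x - 0.3)**2, -1.0, 2.0, L=4.0)` returns a value within `tol` of the global minimum 0; the interval-analysis connection mentioned above replaces the Lipschitz lower bound by an interval evaluation of the range of f.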


References:

[1] L.G. Casado, I. Garcia, Ya.D. Sergeyev, Interval algorithms for finding the minimal root in a set of multiextremal non-differentiable one-dimensional functions, SIAM J. Scientific Computing, 24 (2002), No. 2, pp. 359–376.

[2] Yu.G. Evtushenko, M.A. Posypkin, An application of the nonuniform covering method to global optimization of mixed integer nonlinear problems, Comput. Math. Math. Phys., 51 (2011), No. 8, pp. 1286–1298.

[3] R. Horst and P.M. Pardalos, eds., Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995.

[4] D.E. Kvasov, Ya.D. Sergeyev, Univariate geometric Lipschitz global optimization algorithms, NACO, 2 (2012), No. 1, pp. 69–90.

[5] D. Lera, Ya.D. Sergeyev, An information global minimization algorithm using the local improvement technique, J. Global Optimization, 48 (2010), No. 1, pp. 99–112.

[6] J. Pinter, Global Optimization in Action, Kluwer, Dordrecht, 1996.

[7] Ya.D. Sergeyev, D.E. Kvasov, Diagonal Global Optimization Methods, FizMatLit, Moscow, 2008.

[8] Ya.D. Sergeyev, D.E. Kvasov, Lipschitz global optimization, Wiley Encyclopaedia of Operations Research and Management Science, 4 (2011), pp. 2812–2828.

[9] Ya.D. Sergeyev, D.E. Kvasov, Global search based on efficient diagonal partitions and a set of Lipschitz constants, SIAM J. Optimization, 16 (2006), No. 3, pp. 910–937.

[10] Ya.D. Sergeyev, P. Pugliese, D. Famularo, Index information algorithm with local tuning for solving multidimensional global optimization problems with multiextremal constraints, Math. Programming, 96 (2003), No. 3, pp. 489–512.

[11] R.G. Strongin, Ya.D. Sergeyev, Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms, Kluwer, Dordrecht, 2000.

[12] A.A. Zhigljavsky, A. Zilinskas, Stochastic Global Optimization, Springer, New York, 2008.


The Infinity Computer

and numerical computations

with infinite and infinitesimal numbers

Yaroslav D. Sergeyev

University of Calabria, Rende, Italy
and N.I. Lobatchevsky University of Nizhni Novgorod, Russia

Via P. Bucci, Cubo 42-C, 87036 Rende (CS), Italy

[emailprotected]

Keywords: infinities and infinitesimals, Infinity Computer, numerical computations

A new methodology (see [6,9]) allowing one to execute numerical computations with finite, infinite, and infinitesimal numbers on a new type of computational device called the Infinity Computer (EU, USA, and Russian patents have been granted) is introduced. A calculator using the Infinity Computer technology is presented during the talk. The new approach (its relations with traditional approaches are discussed in [4–6,9]) applies the principle 'The part is less than the whole' to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework (different from that of non-standard Analysis). The new methodology (among other things) introduces infinite integers possessing both cardinal and ordinal properties.

The point of view on infinite and infinitesimal quantities presented in this talk strongly uses two methodological ideas borrowed from modern Physics: relativity and the interrelations holding between the object of an observation and the tool used for this observation. Thus, connections between different numeral systems used to describe mathematical objects and the objects themselves are emphasized. The new computational methodology gives the possibility both to execute numerical (not symbolic) computations of a new type and to simplify fields of Mathematics where the usage of the infinity and/or infinitesimals is necessary. Numerous examples and applications are given: differential equations, divergent series, fractals, linear and non-linear optimization, numerical differentiation, percolation, probability theory, Turing machines, etc. (see [1–10]).

A lot of additional information on the new methodology (papers, reviews, awards, etc.) can be downloaded from http://www.theinfinitycomputer.com

References:

[1] L. D'Alotto, Cellular automata using infinite computations, Applied Mathematics and Computation, 218 (2012), No. 16, pp. 8077–8082.

[2] S. De Cosmis, R. De Leone, The use of Grossone in Mathematical Programming and Operations Research, Applied Mathematics and Computation, 218 (2012), No. 16, pp. 8029–8038.

[3] D.I. Iudin, Ya.D. Sergeyev, M. Hayakawa, Interpretation of percolation in terms of infinity computations, Applied Mathematics and Computation, 218 (2012), No. 16, pp. 8099–8111.

[4] G. Lolli, Infinitesimals and infinites in the history of Mathematics: A brief survey, Applied Mathematics and Computation, 218 (2012), No. 16, pp. 7979–7988.

[5] M. Margenstern, Using Grossone to count the number of elements of infinite sets and the connection with bijections, p-Adic Numbers, Ultrametric Analysis and Applications, 3 (2011), No. 3, pp. 196–204.

[6] Ya.D. Sergeyev, A new applied approach for executing computations with infinite and infinitesimal quantities, Informatica, 19 (2008), No. 4, pp. 567–596.

[7] Ya.D. Sergeyev, Numerical point of view on Calculus for functions assuming finite, infinite, and infinitesimal values over finite, infinite, and infinitesimal domains, Nonlinear Analysis Series A: Theory, Methods & Applications, 71 (2009), No. 12, pp. e1688–e1707.

[8] Ya.D. Sergeyev, A. Garro, Observability of Turing Machines: a refinement of the theory of computation, Informatica, 21 (2010), No. 3, pp. 425–454.

[9] Ya.D. Sergeyev, Lagrange Lecture: Methodology of numerical computations with infinities and infinitesimals, Rendiconti del Seminario Matematico dell'Università e del Politecnico di Torino, 68 (2010), No. 2, pp. 95–113.

[10] Ya.D. Sergeyev, Higher order numerical differentiation on the Infinity Computer, Optimization Letters, 5 (2011), pp. 575–585.


Towards a more realistic

treatment of uncertainty in

Earth and environmental sciences:

beyond a simplified subdivision into

interval and random components

Christian Servin1,4, Craig Tweedie2,4, and Aaron Velasco3,4

1Computational Sciences Program
2Environmental Science and Engineering Program
3Department of Geological Sciences
4Cyber-ShARE Center

University of Texas, El Paso, Texas 79968, USA
[emailprotected], [emailprotected], [emailprotected]

Keywords: interval computations, periodic error, time series, resolution

When processing data, it is often very important to take into account measurement uncertainty, i.e., the fact that the measurement result x̃ is, in general, different from the actual (unknown) value x of the corresponding quantity. In measurement theory, traditionally, a measurement error ∆x := x̃ − x is subdivided into random and systematic components ∆x = ∆_s x + ∆_r x (see, e.g., [2]): the systematic error component ∆_s x is usually defined as the expected value ∆_s x = E[∆x], while the random error component is usually defined as the difference ∆_r x := ∆x − ∆_s x. By definition, the systematic error component does not change from measurement to measurement, while the random errors ∆_r x corresponding to different measurements are usually assumed to be independent.

For the systematic error component, we only know an upper bound ∆_s for which |∆_s x| ≤ ∆_s. Thus, the only information that we have about the value of this component is that it belongs to the interval [−∆_s, ∆_s]. Because of this fact, interval computations are used for processing systematic errors. The random error component is usually characterized by the corresponding probability distribution; often, it is assumed to be Gaussian, with a known standard deviation σ.

For many Earth and environmental science measurements, the differences ∆_r x = ∆x − ∆_s x corresponding to nearby moments of time are often strongly correlated. For example, meteorological sensors may have daytime or nighttime biases, or winter and summer biases. To capture this correlation, environmental science researchers proposed an empirically successful semi-heuristic three-component model of measurement error. In this model, the difference ∆x − ∆_s x is represented as a combination of a "truly random" error ∆_t x (which is independent from one measurement to another) and a new "periodic" component ∆_p x.

We provide a theoretical explanation for this heuristic three-component model, and we show how to extend the traditional interval and probabilistic error propagation techniques to this three-component model. Our preliminary results are described in [3].

In practice, instead of a single quantity x (temperature, density, etc.), we often have a field x(s), in which the value of the quantity changes with the spatial location s (and, sometimes, with time t). For fields, the measurement error x̃(s) − x(s) is caused not only by the inaccuracy of the measuring instrument (MI), but also by the fact that the output x̃(s) of the MI is determined by the average ∫ K(s − s′) · x(s′) ds′ over a neighborhood s′ ≈ s (here, K(s) describes the instrument's spatial resolution). In the talk, we describe how to take into account this additional uncertainty, and how to decrease it by merging ("fusing") two results x̃_1(s) and x̃_2(s) obtained from measuring the same field x(s); our preliminary results appeared in [1]. As a case study, we consider the combination of density descriptions obtained from seismic measurements and from gravity measurements.

References:

[1] O. Ochoa, A.A. Velasco, C. Servin, and V. Kreinovich, Model Fusion under Probabilistic and Interval Uncertainty, with Application to Earth Sciences, International Journal of Reliability and Safety, 6 (2012), No. 1–3, pp. 167–187.

[2] S. Rabinovich, Measurement Errors and Uncertainties: Theory and Practice, American Institute of Physics, New York, 2005.

[3] C. Servin, M. Ceberio, A. Jaimes, C. Tweedie, and V. Kreinovich, How to Describe and Propagate Uncertainty When Processing Time Series, Technical Report UTEP-CS-12-01a, Univ. of Texas at El Paso, Dept. Computer Science, 2012, http://www.cs.utep.edu/vladik/2012/tr12-01a.pdf


Boundary intervals and

visualization of AE-solution sets

for interval system of linear equations

Irene A. Sharaya

Institute of Computational Technologies SD RAS, 6 Lavrentiev Ave., 630090 Novosibirsk, Russia

[emailprotected]

Keywords: interval linear system, AE-solution set, boundary interval

The theory of AE-solution sets (AEss) for interval systems of linear equations was developed by Shary (see, e.g., [1]). The united solution set (USS), the tolerable solution set, and the controllable solution set are particular cases of the AE-solution sets.

It is known [1] that the intersection of an AE-solution set with a closed orthant is a convex polyhedron. A system of linear inequalities determining this polyhedron may be obtained from the initial interval system of equations.

Programs that allow one 'to see' the AE-solution set are useful in the analysis of its properties and in debugging methods for the estimation of this set. By now, there are several such programs:

author(s)             language     address   max. size of system   processes   processes
                                             and solution type     unbounded   thin
                                                                   sets        sets

Rump S.               Matlab       [2]       3×3 USS               −           +
Kramer W., Paw G.     Java         [3]       3×3 USS               ∓           ∓
Kramer W., Braun S.   Maple        [3]       3×3 USS               ∓           ∓
Popova E.D.           Mathematica  [4]       3×3 USS               ∓           −
Popova E.D.           Mathematica  [5]       2×2 AEss              ∓           −
Sharaya I.A.          PostScript   [6]       2×2 AEss              +           +

These programs handle systems with no more than 3 rows and have difficulties in processing unbounded and thin sets.

What will be presented in the talk are

• a new MATLAB package for visualization of AE-solution sets,
• the boundary intervals method as the basis of this package.

The boundary intervals method is a new method for visualizing the solution set of a system of linear inequalities. It may be used (and modified) for

— the solution set of a system of two-sided linear inequalities, and
— the AE-solution set of an interval system of linear equations.

The key object of the method is a boundary interval.

Definition. Let there be given a system of linear inequalities Ax ≥ b with A ∈ R^{m×2}, b ∈ R^m. If the set { x | (A_{i:} x = b_i) & (Ax ≥ b) } for some i ∈ {1, ..., m} is not empty, we call it a boundary interval.

A boundary interval, as a set of points in the plane, may be a single point, a segment, a ray, or a straight line. All edges of the set { x | Ax ≥ b } are boundary intervals. Some vertices of this set may be boundary intervals too.
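As a small illustration of the definition (our own sketch, not the package's actual algorithm), for a bounded 2D polyhedron { x | Ax ≥ b } the i-th boundary interval can be found among the feasible intersections of the line A_{i:} x = b_i with the other constraint lines; unbounded boundary intervals are deliberately not handled here.

```python
def boundary_interval(A, b, i, eps=1e-9):
    """Feasible segment of the line A[i] . x = b[i] inside {x | Ax >= b}.
    Returns the endpoint vertices as a list (empty if the line misses the
    feasible set; assumes the feasible set is bounded)."""
    pts = []
    ai, bi = A[i], b[i]
    for j in range(len(A)):
        if j == i:
            continue
        aj, bj = A[j], b[j]
        det = ai[0] * aj[1] - ai[1] * aj[0]
        if abs(det) < eps:                    # parallel constraint lines
            continue
        # Cramer's rule for the 2x2 system ai.x = bi, aj.x = bj
        x = ((bi * aj[1] - ai[1] * bj) / det,
             (ai[0] * bj - bi * aj[0]) / det)
        # keep only intersection points feasible for the whole system
        if all(r[0] * x[0] + r[1] * x[1] >= c - eps for r, c in zip(A, b)):
            pts.append(x)
    # for a bounded polygon, the extreme feasible intersection points
    # are the endpoints of the boundary interval
    pts.sort()
    return [pts[0], pts[-1]] if pts else []

# Unit square: x >= 0, y >= 0, -x >= -1, -y >= -1
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [0, 0, -1, -1]
# The boundary interval on the line x = 0 is the edge from (0, 0) to (0, 1).
edge = boundary_interval(A, b, 0)
```

Repeating this for every i yields all edges of the polyhedron, which is the kind of information the visualization needs within each orthant.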

The boundary intervals method allows one 'to see' 2D and 3D AE-solution sets for interval linear systems with rectangular matrices, and it can process unbounded and thin sets.

The work is supported by the State Program for Support of Leading Scientific Schools of the Russian Federation (grant No. NSh-6293.2012.9).

References:

[1] S.P. Shary, A new technique in systems analysis under interval uncer-tainty and ambiguity, Reliable Computing, 8 (2002), No. 5, pp. 321–419.

[2] http://www.ti3.tu-harburg.de/rump/intlab/ (INTLAB is the Matlab toolbox for reliable computing. The INTLAB function plotlinsol plots the united solution set of an interval linear system in 2 or 3 unknowns.)

[3] W. Kramer, Computing and Visualizing Solution Sets of Interval Linear Systems, Preprint BUW-WRSWT 2006/8,
http://www2.math.uni-wuppertal.de/~xsc/preprints/prep_06_8.pdf

[4] http://cose.math.bas.bg/webMathematica/webComputing/IntervalSSet3D.jsp

[5] http://cose.math.bas.bg/webMathematica/webComputing/IntervalSSet-AE.jsp

[6] http://www.nsc.ru/interval/Programing/AE-solset.ps (The input data – the matrix, the right-hand side vector of the system and, optionally, an initial enclosing box – can be entered into this file by a text editor. Then the program can be executed by any PostScript interpreter.)


Randomized interval methods

for global optimization

Sergey P. Shary1, Nikita V. Panov2

1Institute of Computational Technologies SD RAS
2Institute of Design and Technology for Computing Machinery SD RAS

Novosibirsk, Russia
[emailprotected], [emailprotected]

Keywords: global optimization, randomized interval algorithms, interval simulated annealing, interval genetic algorithm

Our work is devoted to the problem of global optimization of a real-valued function f : R^n ⊇ X → R over an axis-aligned interval box X:

find min_{x∈X} f(x).    (1)

During the last decades, various interval techniques [1,2,3] have been developed for the solution of problem (1). They enable one to reliably compute two-sided bounds for both the global optimum of f and the argument at which it is attained. The common basis of these methods is adaptive subdivision of the objective function domain X according to the "branch-and-bound" strategy, combined with interval evaluation of the ranges of f over the resulting subboxes of X. When executing, such methods iteratively refine interval estimates of the objective function by splitting, step by step, the boxes on which the estimate is best at the current step.

Extensive employment of such interval global optimization algorithms has revealed a number of problems. If the dimension of the problem is large, and/or the objective function f has a lot of local extrema, the deterministic interval global optimization algorithms can have low performance and produce an answer with considerable overestimation.

Usual ways to improve the efficiency of interval global optimization methods include increasing the accuracy of interval evaluation, incorporating into the algorithm procedures that exclude the subboxes without the optimum, etc. For complex objective functions, one of the main sources of inefficiency is a large amount of unnecessary splittings, and it makes sense to pay more attention to the selection of the box to be subdivided at each step of the algorithm.


In our work, we develop interval global optimization algorithms of a new type that are based on the traditional adaptive subdivision-estimation of the search area, but involve randomization, i.e. introduce a stochastic control into the usual deterministic scheme [4]. This combination provides improved computational efficiency in comparison with ordinary purely deterministic algorithms. Besides, implementation of the above general idea may result in either strictly verified algorithms or those providing only probabilistic guarantees of the answer.

The simplest randomized interval optimization algorithms are “random interval splitting” [4] and “random interval priority splitting” [5]. The latter is an improvement of the former, supplied with so-called prioritization of the subboxes according to their width and/or current estimate.

More involved algorithms we have constructed along this way are interval simulated annealing [4] and a few interval evolutionary algorithms that develop the general idea of genetic algorithms [5,6].

In the randomized interval methods, the use of stochastic control facilitates solving complex problems more efficiently than with the traditional deterministic interval methods. In particular, we feature “verified versions” of such algorithms that provide, in spite of their stochastic character, numerical verification of the answer and produce, on output, two-sided interval bounds for the global optimum.
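To make the general scheme concrete, the following is a minimal Python sketch in the spirit of “random interval priority splitting”; the toy objective f, the starting box and the biased random box selection are our own illustrative assumptions, not the authors’ implementation.

```python
import random

def isq(lo, hi):
    # tight interval enclosure of x^2 over [lo, hi]
    cands = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

def f_range(box):
    # natural interval extension of f(x, y) = (x - 1)^2 + (y + 0.5)^2
    (xl, xh), (yl, yh) = box
    al, ah = isq(xl - 1.0, xh - 1.0)
    bl, bh = isq(yl + 0.5, yh + 0.5)
    return al + bl, ah + bh

def f(x, y):
    return (x - 1.0) ** 2 + (y + 0.5) ** 2

def random_priority_splitting(box0, iters=400, seed=1):
    rng = random.Random(seed)
    work = [box0]
    best_upper = f(*[0.5 * (lo + hi) for lo, hi in box0])
    for _ in range(iters):
        work.sort(key=lambda b: f_range(b)[0])
        # stochastic control: prefer, but do not always take, a promising box
        idx = min(rng.randrange(len(work)), rng.randrange(len(work)))
        box = list(work.pop(idx))
        d = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
        lo, hi = box[d]
        for half in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
            box[d] = half
            nb = tuple(box)
            lb = f_range(nb)[0]
            best_upper = min(best_upper, f(*[0.5 * (l + h) for l, h in nb]))
            if lb <= best_upper:   # keep only boxes that may contain the minimum
                work.append(nb)
    return min(f_range(b)[0] for b in work), best_upper

# two-sided enclosure of the global minimum (the true minimum is 0 at (1, -0.5))
enc = random_priority_splitting(((-4.0, 4.0), (-4.0, 4.0)))
```

The lower bound comes from the interval extension over the remaining boxes and the upper bound from point evaluations at box midpoints, so `enc` brackets the global minimum regardless of the random choices.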

References:

[1] H. Ratschek, J. Rokne, New Computer Methods for Global Optimization, Ellis Horwood, Halsted Press, Chichester, New York, 1988.

[2] E. Hansen, G.W. Walster, Global Optimization Using Interval Analysis, Marcel Dekker, New York, 2004.

[3] R.B. Kearfott, Rigorous Global Search: Continuous Problems, Kluwer, Dordrecht, 1996.

[4] S.P. Shary, Randomized algorithms in interval global optimization, Numerical Analysis and Applications, 1 (2008), No. 4, pp. 376–389.

[5] N.V. Panov, A unification of stochastic and interval approaches for the solution of the problem of global optimization of functions, Computational Technologies, 14 (2009), No. 5, pp. 49–65 (in Russian).

[6] N.V. Panov, S.P. Shary, Interval evolutionary algorithm for global optimization, Transactions of Altai State University, No. 1 (69) (2011), vol. 2, pp. 108–113 (in Russian).


Verified templates for design

of combinatorial algorithms

Nikolay V. Shilov

A.P. Ershov Institute of Informatics Systems, 6, Lavrentiev ave., 630090 Novosibirsk, Russia

[emailprotected]

Keywords: dynamic programming, branch and bound, backtracking, formal specification, formal verification

There exists a split between the reliable computing and program verification communities: sometimes it seems that computing people assume that program code that “implements” a reliable method can be justified by extensive testing, while verification people think that the reliability of any specified computational program can be formally verified in automatic mode from scratch. We try to find a compromise between both extreme viewpoints by suggesting, formalizing and verifying (manually but formally) templates for the design of algorithms for combinatorial optimization.

In particular, we formalize three algorithmic design patterns that are core patterns in combinatorial optimization, namely: Dynamic Programming (DyP), Backtracking (BTR) and Branch-and-Bound (B&B). They can be formalized as design templates, specified by correctness conditions, and formally verified in the Floyd–Hoare methodology [1]. The BTR and B&B templates have been considered in [2] in full detail, DyP is sketched below. A methodological novelty consists in the treatment (interpretation) of DyP as the set-theoretic least fixpoint (lfp) in a virtual domain (according to the Knaster–Tarski theorem).

Dynamic Programming [3] is a recursive method for solving optimization problems presented by an appropriate Bellman equation. We can assume without loss of generality that the Bellman equation has the following canonical form

G(x) = if p(x) then f(x) else g(x, (G(ti(x)), i ∈ [1..n])),

where G : X → Y is the objective function, p ⊆ X is a known predicate, f : X → Y is a known function, g : X* → X is a known function with a variable (but finite) number of arguments n, and all ti : X → X, i ∈ [1..n], are known functions also.

The Dynamic Programming template (specified in Hoare style [1]) follows.


\\Precondition: D is a non-empty set of argument values,
    S and P are ‘‘trivial’’ and ‘‘target’’ subsets in D,
    F : 2^D → 2^D is a call-by-value total monotone function,
    ρ : 2^D × 2^D → Bool is a call-by-value total function
        monotone on the second argument
\\Template: var Z := S, Z1 : subsets of D;
    repeat Z1 := Z; Z := F(Z) until (ρ(P,Z) or Z = Z1)
\\Postcondition: ρ(P,Z) ⇔ ρ(P, lfp λQ.(S ∪ F(Q)))

Proposition. (1) The Dynamic Programming template is partially correct, i.e. for any input data that meets the precondition, the algorithm instantiated from the template either loops or halts in such a way that the postcondition holds upon termination. Assuming that for some input data the precondition of the Dynamic Programming template is valid, and the domain D is finite, the algorithm instantiated from the template terminates after at most |D| iterations of the loop repeat-until.
(2) Let us consider the above Bellman equation and let SPP : X → 2^D be the following support function: SPP(x) = if p(x) then {x} else {x} ∪ (⋃_{i∈[1..n]} SPP(ti(x))). Let v ∈ X be any value. If we adopt (in the DyP template) the graph of G on SPP(v) as D, the set {(u, f(u)) | p(u) & u ∈ SPP(v)} as S, the singleton {(v, G(v))} as P, the mapping Q ↦ {(u, w) ∈ D | ∃w1, . . . , wn : (t1(u), w1), . . . , (tn(u), wn) ∈ Q & w = g(u, w1, . . . , wn)} as F : 2^D → 2^D, and ∃w : (v, w) ∈ (R ∩ Q) as ρ(R, Q) : 2^D × 2^D → Bool, then the instantiated algorithm computes G(v) in the following sense: it terminates after iterating the repeat-until loop |SPP(v)| times at most, upon termination (v, G(v)) ∈ Z, and there is no w ∈ Y (other than G(v)) such that (v, w) ∈ Z.

Some examples that illustrate the use of the DyP template will be given in the conference talk and in a forthcoming full paper.
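As a toy illustration (ours, not from the paper), the template can be instantiated in Python for the Bellman equation of the Fibonacci numbers, G(n) = if n < 2 then n else G(n−1) + G(n−2), encoding Z as a set of pairs of the graph of G, in the spirit of the Proposition above:

```python
def dyp_template(S, F, rho, P):
    # the DyP template: Z := S; repeat Z1 := Z; Z := F(Z) until rho(P,Z) or Z = Z1
    Z = set(S)
    while True:
        Z1 = Z
        Z = F(Z)
        if rho(P, Z) or Z == Z1:
            return Z

v = 10                       # target argument
S = {(0, 0), (1, 1)}         # graph of G on the "trivial" arguments
P = {v}

def F(Z):
    # one step of lfp iteration: derive (n, w) from pairs already in Z
    known = dict(Z)
    new = set(Z)
    for n in range(2, v + 1):
        if n - 1 in known and n - 2 in known:
            new.add((n, known[n - 1] + known[n - 2]))
    return new

def rho(P, Z):
    # does Z already contain a pair for a target argument?
    return any(n in P for n, _ in Z)

fib_v = dict(dyp_template(S, F, rho, P))[v]
```

The loop stops as soon as the pair (v, G(v)) appears in Z, well within the |SPP(v)| iterations promised by the Proposition.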

References:

[1] K.R. Apt, F.S. de Boer, E.-R. Olderog, Verification of Sequential and Concurrent Programs, Springer, 2009.

[2] N.V. Shilov, Verification of backtracking and branch and bound design templates, Modeling and Analysis of Information Systems, 18 (2011), No. 4, pp. 168–180 (in Russian).

[3] R. Bellman, The theory of dynamic programming, Bulletin of the American Mathematical Society, 60 (1954), pp. 503–516.


Informativity of experiments and

uncertainty regions of model parameters

Semen I. Spivak

Bashkir State University, Institute of Petrochemistry and Catalysis, Russian Academy of Sciences

Ufa, [emailprotected]

Keywords: inverse problems, informativity of experiment, uncertainty region

Our work considers problems of the mathematical theory of measurement analysis. We assume that a model describes the measurements within the accuracy of the latter, provided that the following set of inequalities is satisfied:

|Xexp − Xcalc| ≤ ε,   (1)

where ε is the vector of the maximum allowable inaccuracy of experimental measurements of X.

We define the uncertainty range for each calculated parameter ki, i = 1, . . . , n, as such an interval

di = [min ki, max ki]   (2)

that the system (1) is consistent for some values of the input data within that range.

The formulation of the problems of determining the ranges (2), provided that the set of constraints (1) is satisfied, belongs to L.V. Kantorovich [1]. Nowadays, the terms set-membership approach or error-bounded data are usually used in connection with this approach.

The values of εj in the system of inequalities (1) are the characteristics of the maximum allowable experimental error. In this case, fulfillment of the conditions (2) means that the model describes the measurements within the limits conditioned by the maximum allowable measurement error, which is quite reasonable.

In our work, we developed Kantorovich’s approach in application to the solution of inverse problems of chemical kinetics [2].


A principal feature of Kantorovich’s approach is the fact that, based on mathematical programming ideas, it allows the measurement informativity to be analyzed using solutions of the conjugate problem (or the dual problem, in terms of linear programming). The solutions of the conjugate problem allow one to distinguish, from a large set of experimental data, the points that define the minimum and maximum for each of the constants. If the range dj of a certain constant appears to be too large, analysis of the solution to the conjugate problem allows us to build a plan of measurements (conditions and accuracy of new experiments) in order to reduce the range to the value defined by some additional requirements.

Thus, the uncertainty ranges

dj = [min kj, max kj],   j = 1, . . . , m,

for the parameters kj, set by the equation (2), are ranges within which the inequality (1) is satisfied, i.e., within which the kinetic model does not contradict the measurements. The vector d = (d1, . . . , dm) characterizes the degree of uncertainty for each of the target parameters caused by measurement errors. Using this vector, we can determine the measurement accuracy at certain points required to ensure that the degree of uncertainty in the parameters does not exceed a preset value.

The multidimensional uncertainty region will be understood as a set of points that correspond to parameter values for which the relation (1) is valid.

Thus, if the kinetic model of a reaction involves n parameters, the uncertainty region will be n-dimensional. Our goal is to find (in some sense) the uncertainty regions and their two-dimensional projections onto planes defined by pairs of parameters.

The major problem in the use of this method arises in the calculation of multidimensional uncertainty regions. The problems that arise are of both mathematical and physicochemical nature. In particular, the physicochemical interpretation of uncertainty regions becomes the main problem. These problems are the subject of our further studies in this direction.
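For a toy linear model X(t) = k1 + k2·t with hypothetical data (our illustration, not from the paper), the two-dimensional uncertainty region and the ranges (2) can be sketched in Python by a brute-force scan of the parameter plane; in an actual computation one would instead solve two linear programs per parameter and inspect the dual solutions, as in Kantorovich’s approach.

```python
import itertools

# hypothetical measurements of the linear model X(t) = k1 + k2*t
t_data = [0.0, 1.0, 2.0, 3.0]
x_exp = [1.1, 2.9, 5.2, 6.8]
eps = 0.4                       # maximum allowable measurement error

def consistent(k1, k2):
    # condition (1): |X_exp - X_calc| <= eps at every measured point
    return all(abs(x - (k1 + k2 * t)) <= eps for t, x in zip(t_data, x_exp))

# brute-force scan of the parameter plane for the uncertainty region
grid = [i / 100.0 for i in range(-100, 401)]     # k in [-1, 4], step 0.01
region = [(k1, k2) for k1, k2 in itertools.product(grid, grid)
          if consistent(k1, k2)]

# uncertainty ranges (2) as projections of the region onto the parameter axes
d1 = (min(k for k, _ in region), max(k for k, _ in region))
d2 = (min(k for _, k in region), max(k for _, k in region))
```

The list `region` is a discrete picture of the two-dimensional uncertainty region, and `d1`, `d2` are its projections, i.e. the ranges (2) of the two parameters.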

References:

[1] L.V. Kantorovich, On some new approaches to numerical methods and reduction of observations, Siberian Mathematical Journal, 3 (1962), No. 5, pp. 701–709 (in Russian).

[2] S.I. Spivak, Informativity of kinetic measurements, Khimicheskaya promyshlennost segodnya, 2009, No. 9, pp. 52–56 (in Russian).


Analysis of non-uniqueness

of the solution of inverse problems

in the presence of measurement errors

Semen I. Spivak and Albina S. Ismagilova

Bashkir State University, Ufa, Russia

[emailprotected]

Keywords: informativity, non-uniqueness, kinetic constants

This study deals with inverse problems of identification of mechanisms of complex chemical reactions based on kinetic measurements.

The inverse problem consists in determining the rate constants of elementary steps involved in the mechanism of a complex chemical reaction from experimental data on the concentrations of compounds involved in the reaction.

The main difficulty is that, generally, only some of the compounds involved in a reaction can be measured. This insufficient informativity results in the non-uniqueness of the inverse problem solution.

The purpose of this article is a mathematical study of the informativity problem:

- a classification of non-uniqueness types of solutions of inverse problems of chemical kinetics depending on the type of the experiment is provided;

- a methodology is developed for analysis of informativity of kinetic measurements in the solution of inverse problems, which allows one to determine the number and form of independent combinations of reaction rate constants that can be evaluated unambiguously from various kinetic experiment types;

- it is proven that the measurable characteristics of a reaction mechanism are invariant with respect to certain transformations of kinetic parameters. It is proven that these transformations are group transformations (continuous or discrete, depending on the experiment type);

- a methodology is developed for reduction of systems of differential equations of chemical kinetics to systems with smaller dimensionality under the condition that they remain adequate to the actual sets of measurements.

The non-uniqueness of solutions of inverse problems of chemical kinetics results in the existence of uncertainty regions of kinetic constants. An uncertainty region will be understood as such a region [1] within which variation of kinetic constants allows kinetic measurements to be described within their accuracy [2,3].

References:

[1] L.V. Kantorovich, About new approaches to the theory of treatment of observations, Sibirskii Matematicheskii Zhurnal, 3 (1962), pp. 701–708.

[2] S.I. Spivak, M.G. Slinko, V.I. Timoshenko, V.Yu. Mashkin, Interval estimation in the determination of parameters of a kinetic model, Reaction Kinetics and Catalysis Letters, 3 (1974), No. 1, pp. 105–113.

[3] S.I. Spivak, Informativity of kinetic measurements, Khimicheskaya promyshlennost’ segodnya, 2009, No. 9, pp. 52–56.


Interval estimation of system dynamics

model parameters

Semen I. Spivak, Olga G. Kantor

Institute of Social and Economic Research, 71, October Avenue, 450054 Ufa, Russia

o [emailprotected].

Keywords: system dynamics, two-sided bounds for model parameters, L.V. Kantorovich approach

The method of system dynamics, in the case study of a model with two variables, suggests a dependency of the following form:

dx/dt = a1·x^α1·y^β1 − a2·x^α2·y^β2,
dy/dt = a3·x^α3·y^β3 − a4·x^α4·y^β4.   (1)

The specific form of the system dynamics model (1) depends on the parameter values ai, αi, βi, i = 1, . . . , 4, which are defined based upon available statistical data. In the case where the researcher takes the variables into consideration, special methods are applied to obtain point estimates of the parameters because of the complex relationship between said variables. This relationship does not allow one to determine the parameters of system dynamics models even to a first approximation. Moreover, for the numerical experiment of “customizing” the model, it is necessary to know the permissible range of variation of the parameters.

The proposed method is based upon a linearization of the system (1). Using the Maclaurin expansion of the right-hand sides of the equations (1), based on the available observations, it is possible to identify point and interval estimates specifically for the parameters ai, i = 1, . . . , 4. Therefore, the solution of the problem will be carried out in two steps. In the first step, based on Maclaurin series expansions of the equations (1), we define the point and interval estimates a0i, i = 1, . . . , 4, and [a−i; a+i], i = 1, . . . , 4. In the second step, we compute point and interval estimates for all the parameters of the model (1), using a linear expansion of the equations in Taylor series centered at a0i, αi = 0, βi = 0, i = 1, . . . , 4. Estimation of the intervals [a−i; a+i], i = 1, . . . , 4, allows variation of the expansion center at the second step.


The number of observations in practical problems is large, so the problem to be solved is a priori overdetermined. Moreover, such problems are characteristically ill-posed: approximate initial data implies the failure of the requirements for well-posed problems. These circumstances significantly limit the number of methods for determining point and interval estimates of the parameters of system dynamics models. In this regard, particularly interesting is the approach of L.V. Kantorovich, who first proposed the idea of obtaining accurate two-sided bounds for model parameters and for the location of the desired surfaces relative to the observed values.

The problem of determining the parameters of the model (1), for each of the model equations separately and taking into account the above mentioned features, is reduced to solving an inconsistent system of m linear equations with n unknowns. Therefore, to verify the agreement between the calculated and experimental data, for example, in the first equation of the system (1), we have to consider the deviations

ηi = (dx/dt)calculated,i − (dx/dt)experimental,i,   i = 1, . . . , m.   (2)

The standard way to solve the problem of determining the parameters of the model (1) is to minimize the deviations ηi, i = 1, . . . , m, in terms of a certain introduced criterion. The basic selection criterion is the information on the distribution of measurement errors. In real systems, such information is missing as a rule, while we have information on the maximum permissible measurement error at our disposal. This fact is the main argument in favor of the approach of L.V. Kantorovich.

The condition that the model describes the observed values leads to a system of inequalities

|ηi| ≤ εi,   i = 1, . . . , m,   (3)

where εi is the i-th measurement error. Numerical solution of the above system requires, as an initial approximation, at least one point providing the validity of all the relations (3). This point can be found by enforcing the optimum condition for some criterion that characterizes the agreement between the calculated and experimental data. For example, one such criterion can be the sum of squared deviations.
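A minimal Python sketch of this last step, with hypothetical data and two unknowns: a least-squares fit through the normal equations supplies the initial point, which is then checked against the inequalities (3).

```python
# hypothetical observations of a model y ~ a + b*t (m = 5 equations, 2 unknowns)
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.9, 3.1, 5.0, 7.2, 8.9]
n = len(t)

# normal equations of the least-squares problem (sum of squared deviations)
St, Sy = sum(t), sum(y)
Stt = sum(ti * ti for ti in t)
Sty = sum(ti * yi for ti, yi in zip(t, y))
det = n * Stt - St * St
a = (Sy * Stt - St * Sty) / det
b = (n * Sty - St * Sy) / det

# check that this point satisfies the system (3) for the given error bounds
eps = 0.5
residuals = [yi - (a + b * ti) for ti, yi in zip(t, y)]
feasible = all(abs(r) <= eps for r in residuals)
```

If `feasible` holds, the least-squares point is a valid initial approximation for the subsequent two-sided estimation of the parameters.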

A significant advantage of this approach is its capability to take into account a priori constraints on the values of the parameters, with the required dependencies known from additional sources, which can significantly reduce the uncertainty of the problems.


Algorithm for sparse approximate

inverse preconditioners refinement

in conjugate gradient method

Irina Surodina1 and Ilya Labutin2

1 Institute of Computational Mathematics and Mathematical Geophysics, 6, Akademika Lavrentjeva, Novosibirsk, 630090, Russia
[emailprotected]

2 Trofimuk Institute of Petroleum Geology and Geophysics, 3, Akademika Koptyuga, Novosibirsk, 630090, Russia
[emailprotected]

Keywords: sparse approximate inverse, conjugate gradient, GPU

The Conjugate Gradient (CG) algorithm is one of the best known iterative methods for solving linear systems with a symmetric, positive definite matrix [1]. The performance of the CG can be dramatically increased with a suitable preconditioner. The concept of preconditioning in iterative methods is to transform the original system into an equivalent system with the same solution, but a lower condition number. However, the computational overhead of applying the preconditioner must not cancel out the benefit of fewer iterations [2, 3].

Modern parallel implementations of the preconditioned conjugate gradient (PCG) on graphics processing units (GPUs) use sparse approximate inverse (AINV) preconditioners due to their attractive features. First, the columns or rows of the approximate inverse matrix can be generated in parallel. Second, the preconditioner matrix is used in PCG through matrix-vector multiplications, which are easy to parallelize [3]. Thereby the accuracy of the inverse approximation is important.

In this work, we present an algorithm for building a series of AINV preconditioners with arbitrarily high approximation accuracy. The presented algorithm derives from the Hotelling–Schulz algorithm for inverse matrix elements correction [4, 5]. In this algorithm it is assumed that D0 is a certain initial approximation of A−1. Under the condition

‖R0‖ ≤ k < 1,   where R0 = I − A·D0,

we can build the sequence:

D1 = D0(I + R0),   R1 = I − A·D1,
D2 = D1(I + R1),   R2 = I − A·D2,
. . . . . . . . .
Dm = Dm−1(I + Rm−1),   Rm = I − A·Dm.

It is shown that the obtained sequence converges quickly to A−1. Using the presented approach, we refined existing Jacobi and Symmetric Successive Over-Relaxation preconditioners [6] to a reasonable approximation accuracy. All the algorithms were implemented on NVIDIA GPUs, and numerical results were obtained for real-life matrices.
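The refinement loop above can be sketched in pure Python on a toy 2×2 symmetric positive definite matrix, starting from the Jacobi preconditioner D0 = diag(1/aii); the matrix and the number of refinement steps are illustrative, not taken from the paper.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def residual(A, D):
    # R = I - A*D
    AD = matmul(A, D)
    n = len(A)
    return [[(1.0 if i == j else 0.0) - AD[i][j] for j in range(n)]
            for i in range(n)]

def norm_inf(M):
    return max(sum(abs(x) for x in row) for row in M)

A = [[4.0, 1.0], [1.0, 3.0]]
D = [[0.25, 0.0], [0.0, 1.0 / 3.0]]      # Jacobi guess D0 = diag(1/a_ii)

assert norm_inf(residual(A, D)) < 1.0    # convergence condition ||R0|| <= k < 1

for _ in range(6):
    R = residual(A, D)
    n = len(A)
    # D_m = D_{m-1}*(I + R_{m-1}), hence R_m = R_{m-1}^2: quadratic decay
    IpR = [[(1.0 if i == j else 0.0) + R[i][j] for j in range(n)]
           for i in range(n)]
    D = matmul(D, IpR)

err = norm_inf(residual(A, D))           # ||I - A*D|| after refinement
```

Each step squares the residual, so a handful of iterations already drives `D` to working precision of A−1; on a GPU every step is again just matrix products.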

References:

[1] R. Barrett, M. Berry, H. Van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, www.netlib.org/templates/templates.pdf

[2] J. Dongarra, I. Duff, D. Sorensen, H. Van der Vorst, Numerical Linear Algebra for High-Performance Computers, SIAM, Philadelphia, PA, 1998.

[3] R. Li, Y. Saad, GPU-Accelerated Preconditioned Iterative Linear Solvers, Technical Report umsi-2010-112, Minnesota Supercomputer Institute, University of Minnesota, Minneapolis, MN, 2010.

[4] H. Hotelling, Analysis of a complex of statistical variables into principal components, Journal of Educational Psychology, 24 (1933), pp. 417–441 and 498–520.

[5] G. Schulz, Iterative Berechnung der reziproken Matrix, Z. Angew. Math. Mech., 13 (1933), pp. 57–59.

[6] M. Ament, G. Knittel, D. Weiskopf, W. Strasser, A parallel preconditioned conjugate gradient solver for the Poisson problem on a multi-GPU platform, in Proc. 18th Euromicro Conference on Parallel, Distributed and Network-Based Computing, Pisa, Italy, February 17–19, 2010, pp. 583–592.


Computer-assisted error analysis

for second-order elliptic equations

in divergence form

Akitoshi Takayasu1,2 and Shin’ichi Oishi1,3

1 Faculty of Science and Engineering, Waseda University
2 JSPS Research Fellow
3 CREST/JST
3-4-1 Okubo, Shinjuku, Tokyo, 169-8555, Japan
[emailprotected]

Keywords: computer-assisted analysis, finite element method, constructive a priori error estimates

In this talk, a method of computer-assisted error estimation is proposed for second-order divergence form problems on a bounded domain Ω ⊂ R^N (N = 1, 2, 3) with the Dirichlet boundary condition:

−div(a(x)∇u) = f(x)  in Ω,
u|∂Ω = 0.

Assuming that f(x) ∈ L2(Ω) and a(x) ∈ W^{1,∞}(Ω), the solvability of the elliptic problem with degenerate coercivity is shown. Here, degenerate coercivity means that there is a point x ∈ Ω satisfying a(x) = 0. Using a bilinear form, a weak formulation of the original problem is obtained:

Find u ∈ H^1_0(Ω) satisfying (a(x)∇u, ∇v) = (f, v), ∀v ∈ H^1_0(Ω).

The solvability of the weak solution is related to the inf-sup condition, which is sometimes called the LBB condition in FEM theory [1,2,3]. Using verified computations, a lower bound of the value appearing in the inf-sup condition is obtained. The approach is based on Fredholm’s alternative theorem.

After that, let Vh ⊂ H^1_0(Ω) be a certain finite element subspace. A constructive a priori error estimate is obtained for a certain orthogonal projection Rh : H^1_0(Ω) → Vh (Ritz projection) defined by

(a(x)∇(u − Rh·u), ∇vh) = 0,   ∀vh ∈ Vh.


If the continuity and coercivity conditions are satisfied, Céa’s lemma gives the desired a priori error estimate: ‖u − Rh·u‖_{H^1_0} ≤ C(h)·‖f‖_{L2}. On the other hand, it is difficult to obtain the error bound in the case of degenerate coercivity. Our main theorem gives the solvability and a constructive a priori error estimate based on the inf-sup condition. Further, the convergence rate of the error estimate is analyzed with computer assistance. Computational results will be presented to show the solvability and error estimates for the weak solution of the original problems.

References:

[1] O. Ladyzenskaja, Matematicheskie Voprosy Dinamiki Vyazkoi Neszhimaemoi Zhidkosti, Gosudarstv. Izdat. Fiz.-Mat. Lit., Moscow, 1961.

[2] A.K. Aziz and I. Babuska, Survey lectures on the mathematical foundations of the finite element method, in The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations (A.K. Aziz, ed.), Academic Press, New York, 1972.

[3] F. Brezzi, On the existence, uniqueness, and approximation of saddle-point problems arising from Lagrangian multipliers, RAIRO Anal. Numer., 8 (1974), No. 2, pp. 129–151.


On affinity of physical processes

of computing and measurements

Lev S. Terekhov1 and Andrey A. Lavrukhin2

1 Omsk Division of Sobolev Institute of Mathematics, Siberian Branch of the RAS (OD IM SB RAS), 13, Pevtsova st., 644099 Omsk, Russia
[emailprotected]

2 Omsk State Transport University (OSTU), 35, Marx ave., 644046 Omsk, Russia
[emailprotected]

Keywords: natural interval, dynamic uncertainty relation, natural derivative, numerical tests

Interval numbers and interval analysis [1–3] have been inspired primarily by computer calculation. In turn, computer calculation, as a physical process, borrows and inherits the interval structure of the results of natural measurements.

The uncertainty relation (UR) of classical physics adequately describes the uncertainty intervals of natural measurements of a pair of mutually independent variables. To adequately determine the uncertainty interval of natural measurements of a pair of variables related by a functional dependence, a generalization of the classical UR was postulated [4]. The uncertainty ∆f of a measured dependent variable f is proposed as equal to the sum of random and deterministic dynamic components. The generalization of the classical UR which takes into account the dynamics of the physical process was called the dynamic uncertainty relation (DUR) [5, 6]. The DUR generates an algorithm that provides the minimal uncertainty ∆fmin(∆t*) of the measured dependent variable f when it is measured within the interval with optimal width ∆t* of the independent variable t. The interval ∆fmin is the potential accuracy of a natural measurement and is not an artifact. The optimal interval ∆t*, hereinafter referred to as natural, is locally determined for each i-th sample unit:

∆t*_i = ( √µ_{i−1} · |∆fmin(∆t*_{i−1}) / ∆t*_{i−1}| )^{−0.5},   i = 2, 3, . . . ,

and is a measuring and computing element of the adaptive algorithm. In addition to the computation of the natural interval ∆t* for each sample unit, the interval ∆fmin and the interval one-dimensional derivative ∆fmin/∆t* are also calculated. The initial value of the natural interval ∆t*_1 is calculated by other algorithms. The natural interval is limited to the values ∆t* > 0, and its width ∆t* cannot be arbitrarily reduced. Computing the derivative ∆fmin/∆t* on an interval is free from the problem of inaccuracy. The proposed process of measurement is an adaptation of the width ∆t of the measurement interval to the derivative ∆fmin/∆t* on this interval and to the signal-to-noise ratio µ of the measured variable f at each sample unit. In distinction to the known methods of natural measurements, the proposed adaptive algorithm combines samples of both natural measurements and computer calculations, carried out simultaneously as parts of a single measurement-and-calculation process. The purpose of this work is to show, by examples, that the algorithm originally found for natural measurements is identical, to within the accepted approximation, to the algorithm of computer calculations.

Numerical tests of the proposed method against known methods were carried out by comparing the accumulated errors in the whole field of numerical integration.

For the test calculations, the integral I = ∫_{exp(−p)}^{1} (1/x) dx = p was chosen.

Numerical integration by the methods of trapezoids and average rectangles was done with the optimal constant step, providing a minimum cumulative error over the entire area of integration. The adaptive integration is also realized by the trapezoid and rectangle schemes. A comparison shows a decrease by a factor of several in the number of nodes for the adaptive method compared to the classic ones, while the accumulated error of the adaptive method shows a decrease of 1–2 orders of magnitude.

As shown by the numerical tests, the computer calculations and the natural measurements have affinity and structural identity to within the accepted approximation.

References:

[1] G. Schroder, Differentiation of interval functions, Proceedings of the American Mathematical Society, 36 (1972), No. 2, pp. 485–490.

[2] Y.I. Shokin, Interval Analysis, Nauka, Novosibirsk, 1981 (in Russian).

[3] S.P. Shary, Finite-dimensional Interval Analysis, www.nsc.ru/interval/Library/InteBooks/SharyBook.pdf (in Russian).

[4] L.S. Terekhov, On the complete error of radio wave measurements of the inhomogeneous plasma layer, Geomagnetism and Aeronomy, 38 (1998), No. 6, pp. 142–148.

[5] L.S. Terekhov, On quantization of the uncertainty in measurable macroscopic quantity, Russian Physics Journal, 49 (2006), No. 9, pp. 981–986.

[6] L.S. Terekhov, Construction of an analogue of interval values of the derivative, http://conf.nsc.ru/niknik-90/en/reportview/39121


Automatic code transformation

to optimize accuracy and speed

in floating-point arithmetic

Laurent Thevenoux1, Matthieu Martel2 and Philippe Langlois3

1 Univ. Perpignan Via Domitia, Digits, Architectures et Logiciels Informatiques, F-66860, Perpignan, France
2 Univ. Montpellier II, Laboratoire d’Informatique Robotique et de Microelectronique de Montpellier, UMR 5506, F-34095, Montpellier, France
3 CNRS, Laboratoire d’Informatique Robotique et de Microelectronique de Montpellier, UMR 5506, F-34095, Montpellier, France
[emailprotected]

Keywords: floating-point arithmetic, accuracy, compensation, code transformation, instruction level parallelism, program optimization

Algorithms using IEEE-754 floating-point arithmetic [1] may suffer from inaccuracy generated by rounding, since floating-point numbers are approximations of real numbers. This inaccuracy is a critical matter in scientific computing as well as for embedded systems. Several techniques have been introduced and applied to improve the accuracy of numerical algorithms, as for instance compensation or error-free transformations [2].

In practice, these solutions are mainly known by experts, and the corresponding program transformations must be implemented manually. Our objective is to allow the standard software developer to automatically transform his/her code. This transformation is actually an optimization, since we aim to take into account two opposite criteria: accuracy and execution time. A first step towards this automatic optimization is presented in this work.

We propose to automatically introduce, at compile time, compensation steps in (parts of) the floating-point computations. We present a tool to parse C codes and to insert compensated floating-point operations. A new C code is generated by replacing, in the original code, floating-point operations (+, −, ×) by their respective compensated algorithms: TwoSum, TwoProd, etc. [2]. These compensated terms are computed and accumulated in parallel with the original operations. This provides a compensated computation that improves the accuracy of specific computing patterns.
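The effect of such a transformation can be reproduced in a few lines of Python (a hand-written sketch of what the tool inserts automatically; TwoSum is Knuth's error-free addition, and Sum2 is the compensated summation of [2]):

```python
def two_sum(a, b):
    # error-free transformation: a + b = s + err exactly in IEEE-754 arithmetic
    s = a + b
    bp = s - a
    err = (a - (s - bp)) + (b - bp)
    return s, err

def sum2(xs):
    # compensated summation: accumulate the rounding errors in parallel
    s, e = 0.0, 0.0
    for x in xs:
        s, err = two_sum(s, x)
        e += err
    return s + e

data = [1.0, 1e100, 1.0, -1e100]
naive = sum(data)    # the two 1.0 terms are lost to rounding: result 0.0
comp = sum2(data)    # the compensation recovers them: result 2.0
```

The error terms computed by `two_sum` are independent of the running sum, which is precisely the parallelism the transformation exposes at the instruction level.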


We apply this approach to some test cases, aiming to reproduce automatically what experts have done manually. In [3] for instance, the authors propose a compensated polynomial evaluation. They evaluate the Horner form of the polynomial p(x) = (0.75 − x)^5 (1 − x)^11 close to its multiple roots. They show that compensation improves the accuracy. The same results are generated by our automatic transformation, as reported in Figure 1. This figure shows the value of p(x) close to one of its roots, before and after the automatic transformation. As expected, the original results are meaningless, while the transformed code provides more accuracy and yields a smoother polynomial evaluation. Our tool allows a non-expert user to obtain such accuracy improvement automatically, quickly and easily.

Figure 2: The leftmost graph shows p(x) around its root 1 computed in double precision with the Horner algorithm. The rightmost graph shows the results of the automatically generated code.

The next step is to take into account the second optimization criterion: the execution time. Instruction-level parallelism (ILP) or instructions like the FMA (fused multiply-add) can be exploited by modern architectures to save execution time. Because we compute the compensation terms in parallel to the original arithmetic expressions, our transformation introduces ILP that favors a fast execution. This reduces the overhead of this kind of transformation. We complete the transformation tool with an automatic analysis of this overhead. These two aspects will be integrated in a future work in order to optimize code: the time and accuracy criteria will be jointly optimized using trade-offs.

References:

[1] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008 (revision of IEEE Std 754-1985), 2008.

[2] T. Ogita, S.M. Rump, Sh. Oishi, Accurate sum and dot product, SIAM J. Sci. Comput., 26 (2005), No. 6, pp. 1955–1988.

[3] S. Graillat, Ph. Langlois, N. Louvet, Algorithms for accurate, validated and fast polynomial evaluation, Japan Journal of Industrial and Applied Mathematics, 26 (2009), pp. 191–214.

185

Interval matrix multiplication

on parallel architectures

Philippe Theveny and Nathalie Revol

LIP (UMR 5668 CNRS - ENS de Lyon - INRIA - Université Claude Bernard), Université de Lyon

ENS de Lyon, 46 allée d'Italie, 69007 Lyon, France

[emailprotected], [emailprotected]

Keywords: interval arithmetic, matrix multiplication, parallel architectures

Getting efficiency when implementing interval arithmetic computations is a difficult task. The work presented here deals with the efficient implementation of interval matrix multiplication on parallel architectures.

A first issue is the choice of the formulas. The main principle we adopted consists in resorting, as often as possible, to optimized routines such as the BLAS3, as implemented in Intel's MKL for instance. To do so, the formulas chosen to implement interval arithmetic operations are based on the representation of intervals by their midpoint and radius. This approach has been advocated by S. Rump [3] and used, in particular, in his implementation INTLAB. Recall that a panel of formulas for operations using the midpoint-radius representation exists: exact formulas can be found in A. Neumaier [1, pp. 22–23], S. Rump [3] gave approximate formulas with fewer operations, and H.D. Nguyen [2] gave a choice of formulas reaching various tradeoffs in terms of operation count and accuracy. These formulas for the addition and multiplication of two intervals are used by [2,3] in the classical formulas for matrix multiplication; they can be expressed as operations (addition and multiplication) on matrices of real numbers (either midpoints or radii), and S. Rump recapitulates some such matrix expressions in [4]. In this presentation, the merits of each approach are discussed in terms of number of elementary operations, use of BLAS3 routines for the matrix multiplication, and accuracy. The comparison of the relative accuracies is first based on the assumption that the arithmetic operations are implemented using exact arithmetic. We also compare these accuracies assuming that the arithmetic operations are implemented using floating-point arithmetic.

186

A second issue concerns the adaptation to the architecture. Indeed, the architectures targeted in this study are parallel architectures such as multicores or GPUs. When implementing on such architectures, some measures such as the arithmetic operations count are no longer relevant: the measured execution times do not relate directly to the operations count. This is explained by considerations on memory usage, multithreaded computations, etc. We will show some experiments that take these architectural parameters into account and reach good performance. We will give some tradeoffs between memory consumption and memory traffic: it can for instance be beneficial to copy (parts of) the involved matrices into the right caches to avoid cache misses and heavy traffic.

References:

[1] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, 1990.

[2] H.D. Nguyen, N. Revol and P. Theveny, Tradeoffs between accuracy and efficiency for optimized and parallel interval matrix multiplication, PARA 2012.

[3] S.M. Rump, Fast and parallel interval arithmetic, BIT Numerical Mathematics, 39 (1999), No. 3, pp. 539–560.

[4] S.M. Rump, Fast interval matrix multiplication, Numerical Algorithms, 2011, 34 pages, to appear.

187

Fast infimum-supremum interval

operations for double-double arithmetic

in rounding-to-nearest

Naoya Yamanaka and Shin’ichi Oishi

Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo, 169-8555 Japan

naoya [emailprotected]

Keywords: double-double arithmetic, rounding-to-nearest, interval arithmetic

In a numerical calculation we sometimes need higher-than-double-precision floating-point arithmetic to be confident of a result. One alternative is to rewrite the program to use a software package implementing arbitrary-precision extended floating-point arithmetic, such as MPFR [1] or ARPREC [2], and try to choose a suitable precision. There are also possibilities intermediate between the largest hardware floating-point format and general arbitrary-precision software, which combine a considerable amount of extra precision with a relatively modest loss in speed. One such approach is to store numbers in a multiple-component format, where a number is expressed as an unevaluated sum of ordinary floating-point words, each with its own significand and exponent. The multiple-digit approach can represent a much larger range of numbers, whereas the multiple-component approach has the advantage in speed. Sometimes merely doubling the number of bits in a double-precision fraction is enough, in which case arithmetic on double-double operands suffices.

A double-double number is an unevaluated sum of two double precision numbers, capable of representing at least 106 bits of significand. A natural idea is to manipulate such unevaluated sums; this is the underlying principle of double-double arithmetic. It consists in representing a number x as the unevaluated sum of two basic precision floating-point numbers:

x = xh + xl

such that the significands of xh and xl do not overlap, which means here that

|xl| ≤ u |xh| ,

188

where u denotes the machine epsilon; in double precision, u = 2^(−53).

Interval arithmetic, in turn, is a method for finding lower and upper bounds on the value of a result by performing the computation in a manner that preserves these bounds; it thus allows one to develop numerical methods that yield reliable results. Denote by IR the set of intervals X = [x̲, x̄] with x̲ ≤ x̄, x̲, x̄ ∈ R. Then, provided 0 ∉ Y in the case of division, the result of the power set operation X ∘ Y for X, Y ∈ IR and ∘ ∈ {+, −, ·, /} is again an interval, and we have

X ∘ Y := [min(x̲ ∘ y̲, x̲ ∘ ȳ, x̄ ∘ y̲, x̄ ∘ ȳ), max(x̲ ∘ y̲, x̲ ∘ ȳ, x̄ ∘ y̲, x̄ ∘ ȳ)].

In this talk we describe fast algorithms to compute interval operations for double-double arithmetic. These algorithms work in rounding-to-nearest, so they do not need to spend time changing the rounding mode. They evaluate the rounding error of the approximate value computed in rounding-to-nearest mode, and find an interval represented by double-double numbers that includes the true interval.

References:

[1] L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, P. Zimmermann, MPFR: a multiple-precision binary floating-point library with correct rounding, ACM Transactions on Mathematical Software (TOMS), 33 (2007), No. 2, article 13, 15 pp.

[2] D.H. Bailey, Y. Hida, X.S. Li and B. Thompson, ARPREC: an arbitrary precision computational package, LBNL, Berkeley, 2002, 8 pp., http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/arprec.pdf

[3] Y. Hida, X.S. Li and D.H. Bailey, Quad-Double Arithmetic: Algorithms, Implementation, and Application, Report LBL-46996, October 30, 2000, http://crd-legacy.lbl.gov/~xiaoye/TR_qd.ps.gz

[4] T. Ogita, S.M. Rump and S. Oishi, Accurate sum and dot product, SIAM J. Sci. Comput., 26 (2005), No. 6, pp. 1955–1988.

189

Interval polynomial interpolation

for bounded-error data

Ziyavidin Yuldashev, Alimzhan Ibragimov, Shukhrat Tadjibaev

National University of Uzbekistan, Vuzgorodok,

100174 Tashkent, Uzbekistan
ziyaut, alim-ibragimov, [emailprotected]

Keywords: interval arithmetic, interval estimation, interval extension of functions

We consider a function f(x), for which an interval extension f(x) is defined on [a, b]. Assume further that the intervals yi = f(xi) are defined for xi = [x̲i, x̄i] ⊆ [a, b], i = 1, 2, . . . , n, such that

f(xi) ∈ yi for any xi ∈ xi, i = 1, 2, . . . , n. (1)

The interpolation problem for the interval-valued function f(x) requires construction of an interval-valued function g(x) that satisfies

g(xi) = yi, i = 1, 2, . . . , n. (2)

The problem of determining the function g(x) under conditions (1)–(2) will be referred to as IIN1. Similar to the real case, this problem has no unique solution.

Let the points xi = [x̲i, x̄i] ⊆ [a, b], i = 0, 1, . . . , n, be such that

x0 = a, xi ∩ xj = ∅ for i ≠ j, xn = b, (3)

and any real restriction of the function g(x) is a polynomial of degree n:

Rs g(x) ∈ { ∑_{i=0}^{n} a_i x^i | a_i ∈ R }. (4)

The problem (2)–(3)–(4) will be denoted as IIN2.

Let the points xi = [x̲i, x̄i] ⊆ [a, b], i = 0, 1, . . . , n, be such that

wid xi = wid xj for i ≠ j, (5)

in particular,

x_{i+1} − x_i = x_{i+2} − x_{i+1} for i = 0, 1, . . . , n − 2. (6)

190

The problem (2)–(6) is designated as IIN3.

In our work, we have investigated the above problems and verified the results by numerical tests. In particular, for the solution of the problem IIN2, we propose to use the function

g(x) = Ln(x) = ∑_{k=0}^{n} y_k ∏_{j=0, j≠k}^{n} (x ⊖ x_j) / (x_k ⊖ x_j), (7)

where “⊖” denotes the non-standard Markov subtraction, and any real restriction of g(x) gives a Lagrange interpolation polynomial. We have proved

Theorem. For the function Ln(x) defined by (7), the conditions (1)–(2) are satisfied, and the following estimate is valid:

‖wid Ln(x)‖ ≤ ( M / (n + 1)! ) · ‖ ∏_{i=0}^{n} (x − x_i) ‖, (8)

where ‖[a, b]‖ = max{|a|, |b|} and M = max_{x∈[a,b]} |f^{(n+1)}(x)|.

Analogous results are also obtained for interval versions of the alternative interpolation formulae by Newton, Hermite and Chebyshev.

The interval interpolation polynomials constructed have been implemented and integrated into a scalable program system with an appropriate interface [1]. It enables one to compute the values of the interval interpolation formulae by simply overloading the corresponding interval operations with those of the required interval arithmetic [2].

References:

[1] Z.Kh. Yuldashev, A.A. Ibragimov, P.Zh. Kalhanov, A package of interval algorithms for the general public. Registered in the State catalogue of computer programs of the Republic of Uzbekistan, certificate of official registration of computer programs No. DGU 02201, Tashkent, 5/19/2011.

[2] Z.Kh. Yuldashev, A.A. Ibragimov, P.Zh. Kalhanov, A program system for computing values of interval algebraically admissible expressions in various interval arithmetics. Registered in the State catalogue of computer programs of the Republic of Uzbekistan, certificate of official registration of computer programs No. DGU 02202, Tashkent, 5/19/2011.

191

ANOVA, ANCOVA and time trends

modeling: solving statistical problems

using interval analysis

Sergei Zhilin

Altai State University, 61, Lenin ave., 656049 Barnaul, Russia

[emailprotected]

Keywords: linear regression, interval error, ANOVA, ANCOVA, time trend

The interval approach to regression analysis meets a wide variety of real-world applications and can be competitive with traditional statistical methods, because its basic hypotheses are simpler and interval representations of uncertainty are more natural for practitioners. Construction and analysis of the linear regression

y = Xβ + ε (1)

with unknown but bounded error ε is a well-studied area. A number of authors propose techniques for interval estimation of forecasts and regression parameters, outlier detection, and experimental design for this model (e.g., [1] and references therein). In this work, we extend the interval approach to traditional statistical problems such as analysis of variance (ANOVA), analysis of covariance (ANCOVA), and time trend modeling in linear regression analysis.

ANOVA and ANCOVA refer to regression problems with qualitative predictors. The former assumes all the predictors are categorical, while the latter deals with a mixture of quantitative and qualitative predictors. Qualitative predictors can be incorporated in the regression model (1) by introducing “dummy” variables [2].

A k-level qualitative predictor requires k − 1 dummy variables for its representation. One parameter is used to represent the overall mean effect or the mean of some reference level, and the other levels are coded by the values of the k − 1 variables. The coding scheme of levels is not unique, and its choice should be based on convenience of interpretation. The most popular scheme assumes the dummy variables are binary (equal to 1 for the corresponding level and 0 for the others). In such a case, the coefficients of the dummy variables act as supplemental intercepts and represent the effects of switching to their levels. In statistics, coefficient estimation is followed by statistical significance tests of the estimated parameters. Interpretations of the diagnostic tests (F-test and t-test) rest heavily on the model assumptions, and the results of the tests can be difficult to interpret if the model's assumptions are violated [3]. For example, if the error does not have a normal distribution, then in small samples the estimated parameters do not follow normal distributions, which complicates inference.

Boundedness of the error allows one to obtain certain (as opposed to confidence) interval estimates of the parameters βi, which represent the margins of effects, and intervals of possible values of the regression output y*:

βi = [ min_{β∈B} βi, max_{β∈B} βi ],    y* = [ min_{β∈B} X*β, max_{β∈B} X*β ],

where B = ∩_{i=1}^{N} { β : |Xiβ − yi| ≤ ε }, (Xi, yi) is a row in the table of observations, and ε is the upper bound of the error. Certain interval estimates do not need significance testing and may be interpreted directly by a researcher. In particular, testing the null hypothesis of zero difference of coefficients can be replaced by checking whether an interval parameter estimate contains zero. It is easy to find the minimum value of the error bound ε* under which the sample remains consistent with the model (B ≠ ∅). The value of ε* is very important additional information produced by the interval approach, because it characterizes the model's precision and its relation to the dataset. We consider one of the simplest ANOVA-type problems (fixed effects, one-way classification), but the proposed technique is also applicable to more complex variants of ANOVA and ANCOVA.

Constructing a regression equation that takes time trends into account is yet another important problem where dummy variables are helpful. There are many variants of this problem, but the main idea and technique remain the same; only the structure of the regression equation and the manner of dummy-coding the time moments may differ from one specific application to another. Using simple data sets from [2], we show how this technique can be used for the construction and analysis of a regression that takes into account two different time trends.

References:

[1] S.I. Zhilin, Simple method for outlier detection in fitting experimental data under interval error, Chemometrics and Intelligent Laboratory Systems, 88 (2007), No. 1, pp. 60–68.

[2] N.R. Draper, H. Smith, Applied Regression Analysis, Wiley, 1981.

[3] L.L. Harlow, S.A. Mulaik, J.H. Steiger, What If There Were No Significance Tests? Lawrence Erlbaum Associates, London, 1997.

193

Repeated filtration of numerical results

for reliable error estimation

Vladimir Zhitnikov, Nataliya Sherykhalina and Sergey Porechny

Ufa State Aviation Technical University, 12, K. Marx str.,

450000 Ufa, Russia
[emailprotected]

Keywords: reliable computing, numerical filtration, accuracy increase

Impressive successes in scientific calculations have been achieved by methods of interval analysis. Nevertheless, there is a great number of mature numerical methods and their implementations as computer programs, which either would have to be replaced almost completely by new methods in order to benefit from interval analysis, or have to be augmented by methods that post-process their results.

A method is proposed that post-processes numerical results in order to provide physical reliability of the obtained results along with error estimations. Physical reliability can be achieved by the determination of an approximate value of the required parameter (this value is called the standard), by its error estimation (the indeterminacy interval), and by a final verification consisting in intersecting intervals obtained in different ways.

Let us consider a problem discretized using meshes (or grids), and let us vary the number of grid nodes for different discretizations of the same problem. There is then a finite set of results, each corresponding to a different mesh. Each of the obtained values can be considered as a multicomponent model [1], i.e., as the sum of the required value and a few components of the error. An important feature of such a representation is the presence of an unknown addend, which can contain the remainder term, a roundoff error and other constituents due to both the numerical method and the concrete program realization. In particular, the component due to roundoff errors does not tend to zero when the number of mesh nodes increases; in most cases it even increases.

In order to estimate the error term, it is proposed to divide the problem of determining the required value into two separate ones. The first subproblem is the identification of a mathematical model of the numerical experiment results, and the second subproblem is the test of the obtained results with the help of some known particular solutions or some other methods.

194

The first subproblem does not consist in the determination of the theoretical value of the required parameter. Rather, it consists in the decomposition of the result into constituents (components) over some basis known beforehand or determined experimentally. In the latter case, the components have another meaning, because it is known that the main components of the error, along with a constant, are not included in the unknown addend. This first subproblem can be solved approximately by repeated numerical filtration. Filtration consists in eliminating an error component by means of a linear or nonlinear combination of some results (as in the Romberg, Aitken, Wynn and other methods). The filtration formula is determined by the type of basis and the rule for choosing the grid nodes. Filtration provides an approximate value of the required parameter and an error estimation. In this work, we propose to separate the determination of the standard value from the estimation of the error. For this purpose, another filtration is conducted first. It proceeds by eliminating the standard value from the equations taken in pairs, somewhat as in Gaussian elimination. Further filtrations then yield estimations of the error independently of the standard value, and the minimal one from the set of error estimations, or a combination of the ones nearest to the minimum, is chosen. The standard can then be determined by the filtration of the original system up to the number of grid nodes and the filtration number corresponding to the minimum.

The second subproblem is testing. If some particular exact solution is known, this is the verification of whether it belongs to the obtained interval. It can also consist in comparing with an approximate solution obtained independently by another numerical method: in this case, verification is obtained by intersecting the intervals centered at the approximate solutions with radii given by the error estimates. This method, based on additional information, does not influence the formerly obtained estimations, as they were obtained independently by filtration; it only confirms or refutes them. A theoretical estimation of the reliability (the confidence probability) of the joint result of the solution of these two problems is obtained [2].

References:

[1] V.P. Zhitnikov, N.M. Sherykhalina, Modeling of Gravity Fluid Flows with Using of Multicomponent Analysis Methods, Gilem, Ufa, 2009.

[2] V.P. Zhitnikov, N.M. Sherykhalina, Certainty estimation of numerical results in the presence of several methods of solution of the problem, Computational Technologies, 4 (1999), No. 6, pp. 77–87.

195

Author index

Angelov, Todor: 13
Arnault, Ioualalen: 64
Aschemann, Harald: 37, 142, 144
Aslonov, Kadir: 19
Auer, Ekaterina: 15, 37, 77, 142

Badrtdinova, Fayruza: 17
Bazarov, Mamurjon: 19
Burova, Irina: 21

Cerny, Michal: 23
Chapoutot, Alexandre: 25, 27
Chausova, Elena V.: 35
Chen, Chin-Yun: 29, 31
Chesneaux, Jean-Marie: 111
Chevrel, Philippe: 27

Denis, Christophe: 111
Didier, Laurent-Stephane: 25
Dobronets, Boris S.: 33
Dombrovskii, Vladimir V.: 35
Dotschel, Thomas: 37, 144
Dronov, Vadim S.: 39
Dzetkulic, Tomas: 41, 43

Fortin, Pierre: 45

Gaivoronsky, Sergey A.: 140
Gatilov, Stepan: 47
Golodov, Valentin A.: 134
Gouicem, Mourad: 45
Graillat, Stef: 45

Harin, Alexander: 49, 51
Harlow, Jennifer: 53
Hashemi, Behnam: 54
Heimlich, Oliver: 57, 58
Hilaire, Thibault: 27
Hladik, Milan: 60, 62
Horacek, Jaroslav: 62

Ibragimov, Alimzhan: 190
Ismagilova, Albina S.: 174

Jaulin, Luc: 66

Kantor, Olga G.: 151, 176
Karpov, Maksim: 68
Kashiwagi, Masahide: 70
Kawamura, Akitoshi: 72
Kearfott, Ralph Baker: 74
Kersten, Julia: 144
Khamisov, Oleg V.: 76
Kiel, Stefan: 15, 77
Kosheleva, Olga: 79
Kostousova, Elena K.: 81
Kramer, Walter: 83
Kreinovich, Vladik: 79, 84, 158
Kubica, Bartłomiej Jacek: 86, 88
Kuleshov, Andrei: 146
Kumkov, Sergey I.: 90
Kupriianova, Olga: 93
Kvasov, Boris I.: 95

Labutin, Ilya: 178
Lakeyev, Anatoly V.: 97
Lamotte, Jean-Luc: 111
Langlois, Philippe: 184
Lauter, Christoph: 93, 99
Lavrukhin, Andrey A.: 182
Liu, Xuefeng: 101
Lyudvin, Dmitry Yu.: 103

196

Martel, Matthieu: 64, 184
Menissier-Morain, Valerie: 99
Mikushina, Yuliya V.: 90
Miyajima, Shinya: 105, 107
Molorodov, Yurii I.: 109
Montan, Sethy: 111
Morikura, Yusuke: 113
Mouilleron, Christophe: 115
Muller, Norbert: 72

Nadezhin, Dmitry: 117
Najahi, Amine: 115
Neher, Markus: 119
Nehmeier, Marco: 57, 58
Noskov, Sergey I.: 121

Ogita, Takeshi: 123, 129
Oishi, Shin'ichi: 101, 113, 156, 180, 188
Okayama, Tomoaki: 125
Oskorbin, Nikolay: 127
Otakulov, Laziz: 19
Ozaki, Katsuhisa: 113, 129

Panov, Nikita V.: 168
Panovskiy, Valentin N.: 131
Panyukov, Anatoly V.: 133, 134
Popova, Evgenija D.: 136
Popova, Olga A.: 33
Porechny, Sergey: 194
Prolubnikov, Alexander: 138
Pushkarev, Maxim I.: 140

Rada, Miroslav: 23
Rauh, Andreas: 37, 77, 142, 144
Reshetnyak, Alexander: 146
Revol, Nathalie: 186
Revy, Guillaume: 115
Rosnick, Carsten: 72
Rump, Siegfried M.: 148
Ryabov, Gennady G.: 149

Sainudiin, Raazesh: 53
Salakhov, Ilshat R.: 151
Saraev, Pavel: 153
Savchenko, Alexander O.: 155
Sekine, Kouta: 156
Semenov, Konstantin K.: 158
Senkel, Luise: 144
Sergeyev, Yaroslav D.: 160, 162
Serov, Vladimir A.: 149
Servin, Christian: 164
Sharaya, Irene A.: 166
Shary, Sergey P.: 103, 168
Sherykhalina, Nataliya: 194
Shilov, Nikolay V.: 170
Solopchenko, Gennady N.: 158
Spivak, Semen I.: 172, 174, 176
Starichkov, Vladimir: 146
Surodina, Irina: 178

Tadjibaev, Shukhrat: 190
Takayasu, Akitoshi: 156, 180
Terekhov, Lev S.: 182
Thevenoux, Laurent: 184
Theveny, Philippe: 186
Tucker, Warwick: 53
Tweedie, Craig: 164

Velasco, Aaron: 164
Villers, Fanny: 25

Westphal, Ramona: 142
Wolff von Gudenberg, Jurgen: 57, 58
Wozniak, Adam: 88

Yamanaka, Naoya: 188
Yuldashev, Ziyavidin: 190

Zhilin, Sergei: 117, 127, 192
Zhitnikov, Vladimir: 194
Ziegler, Martin: 72

197