Knot Insertion and Deletion Algorithms for B-Spline Curves and Surfaces
Edited by
Ronald N. Goldman, Rice University
Tom Lyche, University of Oslo

Society for Industrial and Applied Mathematics
Philadelphia
Geometric Design Publications
Editor: Gerald E. Farin, Arizona State University

Farin, Gerald E., editor, Geometric Modeling: Algorithms and New Trends (1987)
Farin, Gerald E., editor, NURBS for Curve and Surface Design (1991)
Barnhill, Robert E., editor, Geometry Processing for Design and Manufacturing (1992)
Hagen, Hans, editor, Curve and Surface Design (1992)
Hagen, Hans, editor, Topics in Surface Modeling (1992)
Goldman, Ronald N., and Lyche, Tom, editors, Knot Insertion and Deletion Algorithms for B-Spline Curves and Surfaces (1993)
Cover art reprinted with permission from An Envelope Approach to a Sketching Editor for Hierarchical Free-form Curve Design and Modification, Chapter 7 of this book.
Library of Congress Cataloging-in-Publication Data

Knot insertion and deletion algorithms for B-spline curves and surfaces / edited by Ronald N. Goldman and Tom Lyche.
p. cm. — (Geometric design publications)
Includes bibliographical references and index.
ISBN 0-89871-306-4
1. Geometrical drawing—Data processing. 2. Spline theory. 3. Knot insertion and deletion algorithms. 4. Blossoming (Mathematics) I. Goldman, Ron, 1952- . II. Lyche, Tom. III. Series.
QA497.K6 1993
511'.42—dc20    92-42080

Sponsored by the SIAM Activity Group on Geometric Design.

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the Publisher. For information, write the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

© 1993 by the Society for Industrial and Applied Mathematics.
Preface
Knot insertion and deletion algorithms are the two fundamental procedures needed for understanding, analyzing, and rendering B-spline curves and surfaces. This approach to splines, however, is not traditional; originally divided differences were used to develop almost all of the theory of univariate B-splines [4]. This point of view began to change in the mid-1970s with the publication of the Cox-de Boor-Mansfield recurrence [3]. Many computer scientists and engineers not familiar with divided differences wanted to apply splines to computer graphics and computer aided design; this recurrence provided a much easier entry for them into the subject. Since this recurrence is numerically stable, it lends itself very naturally to computer programming, and therefore, as well, to computer aided design.

Knot insertion algorithms were originally developed by the computer aided design community. These procedures grew out of a need for algorithms to render and intersect B-spline curves and surfaces. Divide and conquer is a standard paradigm in computer science, and subdivision is its incarnation in the study of free-form curves and surfaces. Subdivision had been applied successfully in the analysis of Bezier curves and surfaces [12]. A similar technique was clearly desirable for B-spline curves and surfaces, and this motivated the research for knot insertion procedures. Since Bezier curves and surfaces are just a special case of B-spline curves and surfaces with multiple knots, general knot insertion procedures would allow users to transform B-spline curves and surfaces into piecewise Bezier form and then to apply standard Bezier subdivision techniques. The first knot insertion procedure discovered was Chaiken's algorithm [8], [18] for uniform quadratic B-splines; this algorithm was later generalized by Lane and Riesenfeld [12] to uniform B-splines of arbitrary degree.
Recently, Goldman and Warren [11] discovered an extension of this algorithm to B-splines of arbitrary degree with knots in geometric or affine progression. Sablonniere's algorithm [19] for converting from B-spline to piecewise Bezier form can also be considered a special kind of knot insertion procedure. General knot insertion algorithms for B-spline curves and surfaces of arbitrary degree with arbitrary knots were originally developed by two groups
simultaneously: in 1980 Boehm [2] discovered his knot insertion algorithm, and in the same year Cohen, Lyche, and Riesenfeld [9] collaborated on the Oslo algorithm. In Boehm's algorithm, the basic step is inserting a single knot; in the Oslo algorithm, the fundamental step is computing a single new control point. Both of these algorithms are grounded in the basic recurrence for the B-splines, though the original proofs still made extensive use of divided difference techniques. As a bonus, Lane and Riesenfeld [13] showed how to use knot insertion to provide a simple and elegant proof of the variation diminishing property for B-spline curves. Thus, knot insertion proved to be not only a very practical tool for rendering and intersecting curves and surfaces, but also a powerful technique for theoretical analysis. A rivalry ensued concerning the efficiency of these two basic knot insertion algorithms. Soon the speed of the Oslo algorithm was improved by Morken and Lyche [14] and later another fast variant was provided by Goldman [10].

Two further developments drew attention away from divided differences and towards further reliance on the B-spline recurrence. De Boor and Hollig [6] published a paper on B-splines Without Divided Differences, whose theme was that the entire theory of univariate B-splines could be developed using only the de Boor-Fix dual functionals [5] together with the B-spline recurrence. This refocused attention on the dual functionals and presaged a development which soon became known as the blossoming approach of Ramshaw [15]-[17] or the polar form approach of de Casteljau [7]. Blossoming or polar forms is an approach to dual functionals which is more directly tied to the B-spline recurrence than the de Boor-Fix formula. It soon became clear that the B-spline recurrence, blossoming, and knot insertion were all intimately related.
Seidel [21] rederived Boehm's knot insertion algorithm and Goldman [10] developed a variant of the Oslo algorithm using blossoming.

Knot insertion adds knots in; knot deletion takes them out. Knot insertion is exact; knot deletion is approximate. Knot insertion tells us how to represent exactly a given curve or surface with additional knots; knot deletion tells us how well we can approximate a given curve or surface with fewer knots. Both are important theoretical tools with many useful applications. Knot insertion introduces more data which can be used to provide piecewise linear approximations for rendering and intersection algorithms; knot deletion compresses the data, providing more compact representations, which may lead, in turn, to faster algorithms.

In the fall of 1989, a SIAM conference on Computer Aided Geometric Design was held in Tempe, Arizona. At this time, Phil Barry and Ron Goldman were reexamining algorithms for B-spline curves from a blossoming point of view, and Knut Morken and Tom Lyche were deeply into their ongoing study of knot insertion and deletion algorithms for B-spline curves and surfaces. Considering these algorithms to be the basic procedures for univariate B-splines, we collectively decided to combine our efforts to produce a single book covering as many aspects as we could on Knot Insertion and Deletion
Algorithms for B-spline Curves and Surfaces. The purpose of this book is to provide new material on knot insertion and deletion as well as a fresh look at old results. To include not only theoretical material, but also a very concrete practical application, we asked Elaine Cohen to contribute a chapter on applications to curve sketching in computer aided geometric design.

Goldman and Barry came to the conclusion that many well-known algorithms for B-spline curves and surfaces, and for their local polynomial generalizations called progressive curves, were simply variations of knot insertion procedures. These algorithms include recursive evaluation, differentiation, and integration. They also discovered some new variants of knot insertion procedures. These ideas are investigated in the first four chapters of this book. "Algorithms for Progressive Curves" extends B-spline and blossoming techniques to three important bases—the monomial, power, and Newton dual bases—and shows how to understand evaluation, differentiation, and integration from the knot insertion and blossoming points of view. "Factored Knot Insertion" explains how to apply the Newton dual basis, introduced in the previous chapter, to provide an alternative approach to knot insertion that can sometimes speed up the process. This factored approach bases knot insertion on recursive differentiation rather than recursive evaluation and is closely related to a similar approach recently developed independently by Sankar, Silbermann, and Ferrari [20]. "Knot Insertion Algorithms" is our attempt to explain, compare, and contrast many of the possible alternative approaches to knot insertion and to unify these disparate techniques using the common perspective of blossoming. Thus, the unifying theme of these three chapters is the examination of the various aspects of knot insertion from the modern blossoming point of view.
To keep the book self-contained, "An Introduction to Blossoming" is provided first to initiate readers into this central subject.

Morken and Lyche approached some of these same problems with different motivation and different techniques. Given two spline spaces with distinct knot sequences, they wanted to know how best to represent a curve in one space by a curve in the other space. If one knot sequence is a subset of the other, then this is just the knot insertion problem; if not, then this problem has aspects of both knot insertion and knot deletion. In their first chapter in this volume, Morken and Lyche discuss the general problem of "Conversion Between B-Spline Bases Using the Generalized Oslo Algorithm." Their main tools are the discrete B-splines and the de Boor-Fix dual functionals. This chapter provides a different perspective from the chapters by Barry and Goldman, which focus on the change of basis problem from a blossoming point of view.

Numerical stability issues are critical for implementation. The second chapter by Morken and Lyche focuses on these issues for knot insertion and deletion algorithms. In particular, they ask the question: "How Much Can the Size of the B-Spline Coefficients Be Reduced by Inserting a Knot?" Their answer has implications for the stability of both knot insertion and deletion algorithms for B-spline curves and surfaces.
We close this volume with yet another perspective on knot insertion and knot deletion. In their chapter "An Envelope Approach to a Sketching Editor for Hierarchical Free-form Curve Design and Modification," Mike Banks, Elaine Cohen, and Tim Mueller discuss techniques for creating and modifying free-form curves by mimicking the natural actions used in ordinary freehand sketching. Here knot insertion and deletion are used in a different sense to modify the knot vector of a B-spline curve in order to conform more closely to the intent of the designer.

Knot insertion and deletion are rich themes that we feel lie at the heart of much of the theory and application of univariate B-splines. This volume is an attempt to look at these themes from various points of view, but it is far from the last word on this subject. Recently, attempts have been made by several people to extend knot insertion algorithms to geometrically continuous splines [1], [22], and the connection between knot insertion, dual functionals, and blossoming is beginning to become clear in this setting. Extensions of knot insertion algorithms, dual functionals, and blossoming techniques to multivariate splines are now under investigation, but at this time the subject is not well understood. Perhaps this topic will be the subject of some future volume.

It has taken us rather longer than we had hoped to produce this book. Although we started writing these papers in the autumn of 1989, we did not finish our work until almost the very end of 1991. Nevertheless, we think that these ideas are still timely, and we trust that these topics are of ongoing interest to a large segment of the community devoted to research and development of splines for use in computer graphics and computer aided design. We can only hope that readers learn as much from reading these papers as the authors did from writing them.
We are grateful to the following people for refereeing: Elaine Cohen, Tony DeRose, Tor Dokken, Knut Mørken, Hartmut Prautzsch, Kyrre Strøm, Phil Barry, and Hans-Peter Seidel.

Ron Goldman
December 31, 1991
References

[1] P. J. Barry, R. N. Goldman, and C. A. Micchelli, Knot insertion algorithms for geometrically continuous splines, 1991, in preparation.
[2] W. Boehm, Inserting new knots into B-spline curves, Comput. Aided Design, 12 (1980), pp. 199-201.
[3] C. de Boor, On calculating with B-splines, J. Approx. Theory, 6 (1972), pp. 50-62.
[4] C. de Boor, A Practical Guide to Splines, Springer-Verlag, New York, 1978.
[5] C. de Boor and G. Fix, Spline approximation by quasi-interpolants, J. Approx. Theory, 8 (1973), pp. 19-45.
[6] C. de Boor and K. Hollig, B-splines without divided differences, in Geometric Modeling: Algorithms and New Trends, G. Farin, ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987, pp. 21-27.
[7] P. de Casteljau, Formes a Poles, Hermes, Paris, 1985.
[8] G. Chaiken, An algorithm for high speed curve generation, Comput. Graphics and Image Processing, 3 (1974), pp. 346-349.
[9] E. Cohen, T. Lyche, and R. Riesenfeld, Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics, Comput. Graphics and Image Processing, 14 (1980), pp. 87-111.
[10] R. N. Goldman, Blossoming and knot insertion algorithms for B-spline curves, Comput. Aided Geom. Design, 7 (1990), pp. 69-81.
[11] R. N. Goldman and J. Warren, An extension of Chaiken's algorithm to B-spline curves with knots in geometric progression, 1991, submitted for publication.
[12] J. Lane and R. Riesenfeld, A theoretical development of the generation and display of piecewise polynomial surfaces, IEEE Trans. Pattern Analysis and Machine Intelligence, 2 (1980), pp. 35-46.
[13] J. Lane and R. Riesenfeld, A geometric proof for the variation diminishing property of B-spline approximation, J. Approx. Theory, 37 (1983), pp. 1-4.
[14] T. Lyche and K. Morken, Making the Oslo algorithm more efficient, SIAM J. Numer. Anal., 23 (1986), pp. 663-675.
[15] L. Ramshaw, Blossoming: A Connect-the-Dots Approach to Splines, Tech. Report 19, Digital Equipment Systems Research Center, Palo Alto, CA, 1987.
[16] L. Ramshaw, Beziers and B-splines as multiaffine maps, in Theoretical Foundations of Computer Graphics and CAD, R. A. Earnshaw, ed., NATO ASI Series F, 40, Springer-Verlag, New York, 1988, pp. 757-776.
[17] L. Ramshaw, Blossoms are polar forms, Comput. Aided Geom. Design, 6 (1989), pp. 323-358.
[18] R. Riesenfeld, On Chaiken's algorithm, Comput. Graphics and Image Processing, 4 (1975), pp. 304-310.
[19] P. Sablonniere, Spline and Bezier polygons associated with a polynomial spline curve, Comput. Aided Design, 6 (1978), pp. 257-261.
[20] P. V. Sankar, M. J. Silbermann, and L. A. Ferrari, Curve and surface generation and refinement based on a high speed derivative algorithm, submitted for publication.
[21] H.-P. Seidel, Knot insertion from a blossoming point of view, Comput. Aided Geom. Design, 5 (1988), pp. 81-86.
[22] H.-P. Seidel, Geometric constructions and knot insertion for β-splines of arbitrary degree, submitted for publication.
List of Contributors
Michael J. Banks, Hewlett-Packard Fort Collins, Mail Stop 74, 3404 E. Harmony Rd., Fort Collins, CO 80525
Phillip J. Barry, Department of Computer Science, University of Minnesota, Minneapolis, MN 55455
Elaine Cohen, Department of Computer Science, University of Utah, Salt Lake City, UT 84112
Ronald N. Goldman, Department of Computer Science, Rice University, Houston, TX 77251
Tom Lyche, Institutt for Informatikk, University of Oslo, Blindern 0316, Oslo 3, Norway
Knut Mørken, Institutt for Informatikk, University of Oslo, Blindern 0316, Oslo 3, Norway
Timothy I. Mueller, Department of Computer Science, University of Utah, Salt Lake City, UT 84112
Kyrre Strøm, Institutt for Informatikk, University of Oslo, Blindern 0316, Oslo 3, Norway
Contents

Chapter 1. An Introduction to Blossoming
Phillip J. Barry

Chapter 2. Algorithms for Progressive Curves: Extending B-Spline and Blossoming Techniques to the Monomial, Power, and Newton Dual Bases
Phillip J. Barry and Ronald N. Goldman

Chapter 3. Factored Knot Insertion
Phillip J. Barry and Ronald N. Goldman

Chapter 4. Knot Insertion Algorithms
Phillip J. Barry and Ronald N. Goldman

Chapter 5. Conversion Between B-Spline Bases Using the Generalized Oslo Algorithm
Tom Lyche, Knut Mørken, and Kyrre Strøm

Chapter 6. How Much Can the Size of the B-Spline Coefficients Be Reduced by Inserting One Knot?
Tom Lyche and Knut Mørken

Chapter 7. An Envelope Approach to a Sketching Editor for Hierarchical Free-form Curve Design and Modification
Michael J. Banks, Elaine Cohen, and Timothy I. Mueller

Index
CHAPTER 1

An Introduction to Blossoming

Phillip J. Barry

1.1. Introduction
B-spline curves and surfaces are not only practically useful, but also theoretically elegant. One reason for this is the many diverse approaches to B-splines. B-spline curves and surfaces have been defined and studied through divided differences, convolutions, recursion formulas, dual functionals, and many other approaches. One recent approach is blossoming (also known as polar forms). While all B-spline approaches have advantages, blossoming is a particularly fruitful approach to many aspects of B-splines, including knot insertion. The next three chapters in this book rely heavily on the blossoming approach to B-spline curves and surfaces. Two of these chapters use blossoming in presenting knot insertion algorithms, the other extends blossoming and knot insertion techniques to a class of curves closely related to B-spline curves. Each of these chapters is intended to be self-contained, and therefore contains some minimal background material on blossoming. However, because of the heavy reliance of these chapters on blossoming, readers unfamiliar with this approach may desire a more detailed background. This chapter, then, is a brief introduction to blossoming. More specifically, this chapter has three purposes: to define what blossoming is, to give examples of how it is used, and to motivate why it is useful. The remainder of this chapter is structured as follows: first, in §1.2 we provide a few remarks on why blossoming is a valuable approach to B-spline curves. We next define the blossom in §1.3, and in §1.4 we give some simple examples illustrating the use of blossoming in discussing B-spline algorithms. Section 1.5 covers a variant of the blossom which is particularly useful when working with derivatives, and §1.6 contains concluding remarks. This chapter is intended to be an introduction, rather than a comprehensive discussion, of blossoming. Readers interested in further details are referred to [5], [7]-[9]. In particular, this chapter relies heavily on [7]. 
Readers interested in still other research involving blossoming are referred to [2], [7], [10], [11], where blossoming is extended to surfaces and to geometrically continuous splines.
1.2. Motivation

Blossoming is closely related to other B-spline approaches—most particularly to knot insertion algorithms and to the de Boor-Fix dual functionals [1], [4]; however, in certain contexts it is more efficient or elegant. One reason for this is the relationship between a B-spline curve and its blossom. As we shall see, the blossom is a single construct which encompasses the curve itself, its derivatives, its control vertices, and any new control vertices that may arise from knot insertion. Moreover, the properties of the blossom make clear the relationship between each of these entities. These two observations enable blossoming to provide straightforward explanations of many B-spline algorithms: both the input to, and output from, these algorithms are simply expressed in terms of the blossom, and explaining the algorithms only requires using the properties of the blossom to relate the input and output. Below we will give a few specific examples of using blossoming to explain B-spline algorithms, but first we define the blossom.

1.3. Definition of the Blossom
The core idea in blossoming is that there is a one-to-one correspondence between polynomials of degree at most n, and a certain class of polynomials of n variables. While going from a function of a single parameter to a function of several variables may seem like a step in the wrong direction, the properties of these multivariate functions will actually make discussion of many aspects of B-spline curves less, rather than more, complicated. Let G(t) be a polynomial curve of degree n or less. The blossom B_G(u_1, ..., u_n) of the polynomial G(t) is the unique multivariate polynomial with the following properties:

• multiaffine: for all indices i and all real numbers c,

  B_G(u_1, ..., c u_i + (1 - c) v_i, ..., u_n) = c B_G(u_1, ..., u_i, ..., u_n) + (1 - c) B_G(u_1, ..., v_i, ..., u_n);

• symmetry: for any permutation π of {1, 2, ..., n},

  B_G(u_{π(1)}, ..., u_{π(n)}) = B_G(u_1, ..., u_n);

• diagonal: B_G reduces to G when evaluated on its diagonal; i.e.,

  B_G(t, ..., t) = G(t).
Before proceeding further, consider a simple example. Let G(t) = 5t^3 - 6t^2 + 8t - 31. The blossom is a function of three variables u_1, u_2, u_3. The
multiaffine property is equivalent to no u_i appearing to more than the first power [7]. We therefore get

  B_G(u_1, u_2, u_3) = a_{1,1,1} u_1 u_2 u_3 + a_{1,1,0} u_1 u_2 + a_{1,0,1} u_1 u_3 + a_{0,1,1} u_2 u_3 + a_{1,0,0} u_1 + a_{0,1,0} u_2 + a_{0,0,1} u_3 + a_{0,0,0}

for some values a_{i,j,k}. Because B_G is symmetric we must have a_{1,1,0} = a_{1,0,1} = a_{0,1,1} and a_{1,0,0} = a_{0,1,0} = a_{0,0,1}. Using the diagonal property and equating the coefficients of G with the remaining unknown coefficients of B_G yields

  B_G(u_1, u_2, u_3) = 5 u_1 u_2 u_3 - 2 (u_1 u_2 + u_1 u_3 + u_2 u_3) + (8/3)(u_1 + u_2 + u_3) - 31.
We next extend the blossom to piecewise polynomials. The blossom of a piecewise polynomial will also be piecewise, with each polynomial segment of the curve having a corresponding blossom segment. Moreover, if the polynomial segments of the curve join smoothly, as they will in the case of B-spline curves, then the blossom will have a related property. Let G(t) be a B-spline curve with knots {t_i} and control vertices {V_i}, and suppose t_i has multiplicity μ in the knot vector. Denote by B_{G,L} the segment of the blossom corresponding to the interval which ends at t_i, and by B_{G,R} the segment of the blossom corresponding to the interval immediately to the right of t_i. Then

  B_{G,L}(t_i, ..., t_i, u_1, ..., u_{n-μ}) = B_{G,R}(t_i, ..., t_i, u_1, ..., u_{n-μ}),   (1.1)

where t_i appears μ times, for all choices of u_1, ..., u_{n-μ} [7]. The multiaffine, symmetry, and diagonal properties serve to characterize the blossom, and to show the close connection, given by the diagonal property, between the curve and the blossom; however, they do not explicitly link the blossom to the curve's B-spline representation. This is done by a fourth property, called the dual functional property:

• dual functional: any control vertex satisfies

  V_i = B_G(t_{i+1}, ..., t_{i+n}),

where the blossom segment used is the segment corresponding to any of the intervals [t_i, t_{i+1}), ..., [t_{i+n}, t_{i+n+1}). In other words, we can obtain a control vertex by taking any curve segment affected by that vertex, and evaluating the corresponding blossom segment at the appropriate n consecutive knots. The dual functional property applies not only to the original knot vector, but also to any refinement of that knot vector. If {u_i} is a refinement of {t_i}, then the control vertices {W_i} expressing the curve G(t) over the refined knot vector are

  W_i = B_G(u_{i+1}, ..., u_{i+n}),
where the blossom segment used is any segment corresponding to an interval [t_j, t_{j+1}) such that (t_j, t_{j+1}) has a nonempty intersection with (u_i, u_{i+n+1}). That is, the new control point W_i must affect the curve over some subsegment of (t_j, t_{j+1}). The dual functional property is crucial in our use of blossoming in the next few chapters, since it expresses both the new and the old control vertices in terms of the blossom. In fact, there is a natural restriction of the arguments for the blossom so that the range of the blossom is precisely the set of new control points that may arise through knot insertion. In other words, with this restriction on the arguments, every point in the range of the blossom will be a point that, if we were to insert the right knots, would be a new control point of the curve. Consider the blossom segment corresponding to the knot interval [t_i, t_{i+1}). We restrict the arguments to this blossom segment to arguments u_1, ..., u_n which (1) are consecutive values in some valid refinement of the original knot vector, and, (2a) if they contain a value less than t_i, must have t_i appearing in the argument list at least as often as it appears in the original knot vector, and, (2b) if they contain a value greater than t_{i+1}, must have t_{i+1} appearing in the argument list at least as often as it appears in the original knot vector. (See p. 86 of [7] for further details.) Restriction (1) ensures that some blossom segment, when evaluated at the specific arguments, will yield a potential new control vertex, and restrictions (2a) and (2b) ensure that the blossom segment corresponding to [t_i, t_{i+1}) is one segment that does so. One added benefit of this restriction is that we do not have to keep specifying which blossom segment is under consideration. This is because a given argument will only be valid for certain segments, and each of these segments evaluated at the arguments will yield the same value, by (1.1).

1.4. Uses of the Blossom

In this section we give a few examples of using the blossom to explain algorithms. While the examples given in this section are elementary, the techniques employed here are similar to the techniques used when explaining more involved algorithms in later chapters. Let us consider first using the de Casteljau algorithm (see, e.g., [6]) to subdivide cubic Bezier curves. Bezier curves are known to be a special, polynomial type of B-spline curve with the knot vector given by n knots at 0 and n knots at 1. (Since each degree n B-spline is actually defined by 2n + 2 knots, Bezier curves are sometimes presented as having a knot vector with n + 1 knots at 0 and another n + 1 at 1; however, because Bezier curves are defined only over a single interval, only n knots at each value are needed.) By the dual functional property the control vertices P_i for the Bezier representation of the curve are given in terms of the blossom by

  P_i = B_G(0, ..., 0, 1, ..., 1),
where 0 appears as an argument n - i times and 1 appears i times.
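These properties can be checked concretely. The sketch below is our own illustration, not from the text: it uses an arbitrary cubic G(t) = t^3 - 2t + 1, whose blossom can be written down directly, and verifies the diagonal, multiaffine, and dual functional properties, including the dual functional property over a refined knot vector.

```python
# Hedged sketch: G and its blossom B are illustrative choices for this demo.
def G(t):
    return t**3 - 2*t + 1

def B(u1, u2, u3):
    # Blossom of G: symmetric and multiaffine, with B(t,t,t) = G(t).
    return u1*u2*u3 - (2/3)*(u1 + u2 + u3) + 1

# Diagonal property: B(t,t,t) = G(t).
assert abs(B(2.0, 2.0, 2.0) - G(2.0)) < 1e-12

# Multiaffine property in the first argument.
c, u, v = 0.3, 1.0, 4.0
assert abs(B(c*u + (1-c)*v, 2.0, 5.0)
           - (c*B(u, 2.0, 5.0) + (1-c)*B(v, 2.0, 5.0))) < 1e-9

# Dual functional property: cubic Bezier control vertices from the
# knot vector with n = 3 knots at 0 and 3 knots at 1.
knots = [0, 0, 0, 1, 1, 1]
bezier = [B(*knots[i:i+3]) for i in range(4)]
assert bezier[0] == G(0) and bezier[3] == G(1)   # endpoint interpolation

# The same formula over the refined knot vector [0,0,0,a,1,1,1] gives
# the control points after inserting the knot a; for instance
# B(0,0,a) = (1-a) B(0,0,0) + a B(0,0,1) by the multiaffine property.
a = 0.4
refined = [0, 0, 0, a, 1, 1, 1]
new_pts = [B(*refined[i:i+3]) for i in range(5)]
assert abs(new_pts[1] - ((1-a)*bezier[0] + a*bezier[1])) < 1e-12
```

The last assertion is exactly the knot insertion step discussed next: a new control vertex is an affine combination of two old ones, with coefficients read off from the blossom arguments.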
FIG. 1.1. The de Casteljau algorithm for a (cubic) Bezier curve. The values at the base of the triangle are original control vertices, those on the left side are control vertices for the curve segment over [0, a], and those on the right for the curve segment over [a, 1]. Each computed point is found by taking the two points immediately below it, multiplying them by the respective labels on the edges incident to the point being found, and summing the two resulting products.
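The triangular scheme of Fig. 1.1 can be sketched in a few lines of code. The cubic G(t) = t^3 - 2t + 1 and its blossom B below are our own illustrative choices; the triangle's left and right sides are collected as the two subdivision polygons and checked against direct blossom evaluation.

```python
# Hedged sketch of the de Casteljau triangle; G and B are assumptions.
def G(t):
    return t**3 - 2*t + 1

def B(u1, u2, u3):
    # Blossom of G: symmetric, multiaffine, B(t,t,t) = G(t).
    return u1*u2*u3 - (2/3)*(u1 + u2 + u3) + 1

a = 0.6
row = [B(0, 0, 0), B(0, 0, 1), B(0, 1, 1), B(1, 1, 1)]  # base of the triangle
left, right = [row[0]], [row[-1]]
for _ in range(3):
    # Combine consecutive pairs in the ratio 1-a : a (multiaffine property).
    row = [(1 - a)*p + a*q for p, q in zip(row, row[1:])]
    left.append(row[0])        # left edge: control vertices over [0, a]
    right.insert(0, row[-1])   # right edge: control vertices over [a, 1]

# The apex is the point on the curve: B(a,a,a) = G(a).
assert abs(left[3] - G(a)) < 1e-12 and abs(right[0] - G(a)) < 1e-12
# Interior points agree with direct blossom evaluation.
assert abs(left[2] - B(0, a, a)) < 1e-12
assert abs(right[1] - B(a, a, 1)) < 1e-12
```

Note that the same loop both subdivides the curve and evaluates it, which is the point made in the text: the blossom unites the old control vertices, the new ones, and the curve itself.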
For the sake of simplicity, consider the cubic case. We begin with the control vertices B_G(0,0,0), B_G(0,0,1), B_G(0,1,1), and B_G(1,1,1). From these we wish to get new control vertices representing the curve over the segments [0, a] and [a, 1], respectively, as Bezier curves. The new representation can be viewed as a B-spline curve with knot vector consisting of three values, 0, a, 1, each having multiplicity n. The new control vertices are therefore B_G(0,0,0), B_G(0,0,a), B_G(0,a,a), B_G(a,a,a), B_G(a,a,1), B_G(a,1,1), and B_G(1,1,1). The question is now how to find all the new control vertices from the old control vertices.

Up to this point we have not made extensive use of the multiaffine property. One power of this property is that it allows us to derive new blossom values from old ones. For example, consider finding the value B_G(0,0,a). Note

  B_G(0,0,a) = (1 - a) B_G(0,0,0) + a B_G(0,0,1),

expressing one of the new control vertices in terms of two old control vertices. In fact, if we take all consecutive pairs of the old control vertices, and combine them in the ratio 1 - a : a, then we get the values B_G(0,0,a), B_G(0,a,1), and B_G(a,1,1). Note the first and last of these are desired new points. If we then take these three points and similarly combine them in the ratio 1 - a : a, we get the new points B_G(0,a,a) and B_G(a,a,1), and combining these yields B_G(a,a,a). This process is illustrated diagrammatically in Fig. 1.1 and graphically in Fig. 1.2. Referring to Fig. 1.1, observe that the edge labels in that figure are redundant—they follow from the labels at the nodes and the multiaffine property. Also, note that the process just described
FIG. 1.2. Applying the de Casteljau algorithm to a cubic Bezier curve with a = 0.6. To conserve space, the points are labelled using only the blossom arguments.
actually evaluates the curve by the standard de Casteljau algorithm. Thus, this algorithm is an example of how the blossom unites the original control vertices, the curve itself, and the new control vertices.

As a second example, consider the de Boor algorithm [3] for evaluating a (cubic) B-spline curve at a value a ∈ [t_q, t_{q+1}]. The input to the algorithm, expressed in terms of the blossom, is the control vertices B_G(t_{q-2}, t_{q-1}, t_q), B_G(t_{q-1}, t_q, t_{q+1}), B_G(t_q, t_{q+1}, t_{q+2}), B_G(t_{q+1}, t_{q+2}, t_{q+3}) governing the segment of the curve over [t_q, t_{q+1}). The output will be the point B_G(a, a, a) on the curve. We can get from the original control vertices to the point on the curve by a set of affine combinations as shown in Figs. 1.3 and 1.4. Note the exact combinations used are not explicitly given in Fig. 1.3, but follow from the blossom representation of the points involved and the multiaffine property. For example, consider finding B_G(t_{q-1}, t_q, a) from B_G(t_{q-2}, t_{q-1}, t_q) and B_G(t_{q-1}, t_q, t_{q+1}). Writing a = ((t_{q+1} - a)/(t_{q+1} - t_{q-2})) t_{q-2} + ((a - t_{q-2})/(t_{q+1} - t_{q-2})) t_{q+1} and using the multiaffine property, we have

  B_G(t_{q-1}, t_q, a) = B_G(a, t_{q-1}, t_q)
    = ((t_{q+1} - a)/(t_{q+1} - t_{q-2})) B_G(t_{q-2}, t_{q-1}, t_q) + ((a - t_{q-2})/(t_{q+1} - t_{q-2})) B_G(t_{q+1}, t_{q-1}, t_q)
    = ((t_{q+1} - a)/(t_{q+1} - t_{q-2})) B_G(t_{q-2}, t_{q-1}, t_q) + ((a - t_{q-2})/(t_{q+1} - t_{q-2})) B_G(t_{q-1}, t_q, t_{q+1})

(the last line follows by the symmetry property). Many other B-spline algorithms, including knot insertion algorithms, have the same general structure as the two algorithms discussed here: they consist of successive affine combinations of pairs of points, beginning with the original
FIG. 1.3. The de Boor algorithm for evaluating a (cubic) B-spline curve.
control vertices. The method for deriving these other algorithms in terms of blossoming also consists of using the dual functional property to cast the input and output in terms of the blossom, identifying the sequence of intermediate blossom values that allows one to go between the input and output by a sequence of affine combinations, and then using the multiaffine and symmetry properties to identify specifically the coefficients in those combinations. Note that the first and last steps are straightforward, so in the blossoming approach the challenge is in identifying the intermediate blossom values.

1.5. The Multilinear Blossom
Thus far we have discussed how the blossom encompasses the curve, its original control vertices, and any new control vertices which may arise through knot insertion. In this section we show how the blossom also encompasses derivatives. This requires a slightly different form of the blossom than is used above. The form of the blossom presented in §1.3 is sometimes called the "multiaffine blossom." In this section we discuss another variant—the "multilinear blossom." Because the multilinear blossom is discussed at length in a later chapter, our discussion of it here will be brief. We will give its definition, mention its connection with derivatives, and provide an example of its use. We will let B_G denote both the multilinear and multiaffine blossom. Because the multilinear blossom's arguments differ from those of the multiaffine blossom, no confusion should result. We will also not explicitly mention which segment of the blossom is under consideration, since this should be clear from the context. The multilinear blossom depends on n pairs (u_i, w_i), and has the following properties:

• multilinear: for all indices i and all real numbers c_1, c_2,

  B_G((u_1, w_1), ..., c_1 (u_i, w_i) + c_2 (u_i', w_i'), ..., (u_n, w_n)) = c_1 B_G((u_1, w_1), ..., (u_i, w_i), ..., (u_n, w_n)) + c_2 B_G((u_1, w_1), ..., (u_i', w_i'), ..., (u_n, w_n));
Knot Insertion and Deletion Algorithms
FIG. 1.4. Applying the de Boor algorithm to a cubic B-spline curve. The curve is over the knot vector (0, 0, 0, 2, 4, 6, 8, 10), and is evaluated at a = 5. To conserve space, the points are labelled using only the blossom arguments.
symmetry: for any permutation π of {1, 2, ..., n},

B((u_{π(1)}, w_{π(1)}), ..., (u_{π(n)}, w_{π(n)})) = B((u_1, w_1), ..., (u_n, w_n));
diagonal: B((t, 1), ..., (t, 1)) = p(t).

The multilinear blossom also has a dual functional property:

dual functionals: P_i = B((t_{i+1}, 1), ..., (t_{i+n}, 1)).
The link between the multiaffine and multilinear blossom is given by

B(u_1, ..., u_n) = B((u_1, 1), ..., (u_n, 1)).

This equation allows us to represent any multiaffine blossom value as a multilinear blossom value, and any multilinear blossom value with all w_i = 1 as a multiaffine blossom value. Next we consider the link between derivatives and the multilinear blossom. Let a ∈ [t_l, t_{l+1}). Then

p^(k)(a) = (n!/(n − k)!) B((a, 1), ..., (a, 1), (1, 0), ..., (1, 0)),   (1.5)
where the blossom segment used is the one corresponding to the interval [t_l, t_{l+1}), and (a, 1) appears as an argument n − k times, and (1, 0) appears as an argument k times [7]. As an example of using the multilinear blossom, consider finding the first derivative of a cubic B-spline curve G at a ∈ [t_q, t_{q+1}) using an algorithm in [3]. This algorithm can be explained using blossoming, as is shown in Fig. 1.5. We begin with the control points for the segment of the spline over [t_q, t_{q+1}). By the dual functional property these are the values at the base of the diagram. Then, in the first level of computations in the triangle, we introduce the argument (1, 0). This is done by using the multilinear and symmetry properties. Consider, for example, finding the value B_G[(t_{q−1}, 1), (t_q, 1), (1, 0)]. We get

B_G[(t_{q−1}, 1), (t_q, 1), (1, 0)] = (B_G(t_{q−1}, t_q, t_{q+1}) − B_G(t_{q−2}, t_{q−1}, t_q)) / (t_{q+1} − t_{q−2}),

since (1, 0) = ((t_{q+1}, 1) − (t_{q−2}, 1)) / (t_{q+1} − t_{q−2}).
In each of the next two levels of computation the argument (a, 1) is introduced. The resulting value B_G((a, 1), (a, 1), (1, 0)) is G′(a)/3 by (1.5).
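The multilinear evaluation scheme just described can be sketched in code. The routine below is a minimal illustration (the function name and the test curve p(t) = t^3 are our own choices, not from the text): at level r it writes the pair (u_r, w_r) as a linear combination of (t_i, 1) and (t_{i+n+1−r}, 1), so passing (a, 1) reproduces an ordinary multiaffine de Boor step, while passing (1, 0) produces a divided difference.

```python
def multilinear_blossom(T, P, args):
    """Evaluate the multilinear blossom of a degree-n progressive curve.

    T    -- knot sequence t_1, ..., t_2n
    P    -- control points P_0, ..., P_n (P_i = blossom at t_{i+1}, ..., t_{i+n})
    args -- n pairs (u_r, w_r); (a, 1) is an ordinary point, (1, 0) a derivative slot
    """
    n = len(P) - 1
    v = list(P)
    for r in range(1, n + 1):
        u, w = args[r - 1]
        new = []
        for i in range(r, n + 1):
            ti, tj = T[i - 1], T[i + n - r]     # t_i and t_{i+n+1-r}, 1-indexed
            c2 = (u - w * ti) / (tj - ti)       # solve c1*t_i + c2*t_j = u,
            c1 = w - c2                         #       c1 + c2         = w
            new.append(c1 * v[i - r] + c2 * v[i - r + 1])
        v = new
    return v[0]

# The cubic p(t) = t^3 has blossom u1*u2*u3, so by the dual functional
# property its control points over knots 0..5 are products of knot triples.
T = [0, 1, 2, 3, 4, 5]
P = [T[i] * T[i + 1] * T[i + 2] for i in range(4)]    # [0, 6, 24, 60]
a = 2.5
value = multilinear_blossom(T, P, [(a, 1), (a, 1), (a, 1)])      # p(a)  = a^3
deriv = 3 * multilinear_blossom(T, P, [(a, 1), (a, 1), (1, 0)])  # p'(a) = 3a^2, by (1.5)
```

Substituting (1, 0) for a third argument as well yields the leading coefficient of p, again in agreement with (1.5) for k = n.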
1.6. Summary
In this chapter we have given the definition of the blossom, mentioned why it is useful, and provided a few examples. This discussion will be continued and expanded in the next few chapters, where blossoming will be the primary tool we use in discussing knot insertion.

References

[1] P. J. Barry, De Boor-Fix functional and polar forms, Comput. Aided Geom. Design, 7 (1990), pp. 425-430.
[2] P. J. Barry, R. N. Goldman, and C. A. Micchelli, Knot insertion algorithms for piecewise polynomial spaces determined by connection matrices, Adv. Comput. Math. (1993), to appear.
[3] C. de Boor, On calculating with B-splines, J. Approx. Theory, 6 (1972), pp. 50-62.
[4] C. de Boor and K. Hollig, B-splines without divided differences, in Geometric Modeling: Algorithms and New Trends, G. Farin, ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987.
[5] P. de Casteljau, Shape Mathematics and CAD, Kogan Page Ltd., London, 1986.
[6] G. Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, Academic Press, San Diego, CA, 1988.
[7] L. Ramshaw, Blossoming: A Connect-the-Dots Approach to Splines, Tech. Report 19, Digital Equipment Systems Research Center, Palo Alto, CA, 1987.
[8] L. Ramshaw, Blossoms are polar forms, Comput. Aided Geom. Design, 6 (1989), pp. 323-358.
[9] H.-P. Seidel, A new multiaffine approach to splines, Comput. Aided Geom. Design, 6 (1989), pp. 23-32.
[10] H.-P. Seidel, Polar forms for geometrically continuous spline curves of arbitrary degree, ACM Trans. on Graphics, 12 (1993).
[11] H.-P. Seidel, Algorithms for B-patches, Proceedings Graphics Interface '91, (1991), pp. 8-15.
CHAPTER 2

Algorithms for Progressive Curves: Extending B-Spline and Blossoming Techniques to the Monomial, Power, and Newton Dual Bases

Phillip J. Barry and Ronald N. Goldman

2.1. Introduction
Many well-known algorithms for B-spline curves are local recursive procedures. These include de Boor's evaluation algorithm [10], Ramshaw's blossoming algorithm [21], Boehm's knot insertion algorithm [8], (variants of) the Oslo algorithm [16], the standard two-term differentiation algorithm [11], Boehm's derivative algorithm [9], a classical integration algorithm [14], and Sablonniere's change of basis algorithm for converting from B-spline to piecewise Bezier form [24]. Marsden's identity [20] and the de Boor-Fix form of the dual functionals [12] also have natural local interpretations. These methods are not only local techniques, they are actually polynomial procedures. In this chapter we extend these local techniques to a wider class of curve schemes which includes the monomial, power, and Newton dual forms of a curve. We then apply these methods to obtain transformations relating the B-spline, Bezier, monomial, power, and Newton dual forms of polynomial and piecewise polynomial curves. The underlying theme of this chapter is unification: unification of algorithms, unification of derivations, and unification of representations. We shall demonstrate that many of the seemingly disparate algorithms mentioned above are really just different manifestations of the same underlying procedure, and we will show that many apparently different representations for curves can be incorporated within a single consistent framework. To achieve this unity, we shall employ two basic techniques: blossoming and homogenization. In §2.2 we introduce the notion of a progressive polynomial curve by relaxing the usual constraints on the knots of a B-spline curve. We go on to discuss the close connection between progressive curves and blossoms. We then apply blossoming to show that the local B-spline algorithms mentioned above extend readily to these progressive schemes. In §2.3 we use homogenization to further generalize the notion of a progressive curve.
Since homogeneous polynomials play a fundamental role in our development, we also provide a
rigorous treatment of the related concepts of multilinear blossoms and vector-valued knots. We then apply homogenization to unify differentiation and knot insertion algorithms for progressive curves. In §2.4 we show that the power, monomial, and Newton dual forms of a curve are progressive schemes, and we apply progressive procedures to derive transformations relating the B-spline, Bezier, monomial, power, and Newton dual forms of a curve. In §2.5 we give a short summary of our results and briefly discuss a few open questions.

2.2. Algorithms
Since we are interested in extending various local algorithms for B-spline curves to other polynomial schemes, we begin in §2.2.1 by using the de Boor algorithm to characterize those polynomial schemes for which these procedures remain legitimate. We call such schemes progressive curves and the corresponding polynomial bases progressive bases. The main technique we shall adopt to analyze these progressive curves is the blossoming form of the dual functionals [21]. Because blossoming is a relatively new technique, this method, along with Ramshaw's algorithm for blossoming progressive curves, is reviewed briefly in §2.2.2. In §2.2.3 we use blossoming to extend Boehm's knot insertion algorithm to progressive schemes and in §2.2.4 we do the same for the Oslo algorithm. Sablonniere's algorithm for converting from B-spline to piecewise Bezier form is generalized to a transformation procedure between arbitrary progressive schemes in §2.2.5. Section 2.2.6 is devoted to deriving Marsden's identity and the de Boor-Fix form of the dual functionals for progressive bases from a blossoming point of view.

2.2.1. de Boor's Evaluation Algorithm. Let p(t) be a polynomial segment of a degree n B-spline curve over the interval [t_n, t_{n+1}]. The polynomial p(t) is influenced only by the knots t_1, ..., t_{2n}. Denote the control points of p(t) by P_0, ..., P_n. Then we have the following well-known recursive evaluation algorithm for p(t).

de Boor's Evaluation Algorithm [10]. Let

p_i^0(t) = P_i, i = 0, ..., n,

p_i^r(t) = ((t_{i+n+1−r} − t) p_{i−1}^{r−1}(t) + (t − t_i) p_i^{r−1}(t)) / (t_{i+n+1−r} − t_i), r = 1, ..., n, i = r, ..., n.

Then p(t) = p_n^n(t).
We illustrate this algorithm graphically for cubic curves in Fig. 2.1. Notice that this recursive evaluation algorithm computes a triangular array of values p_i^r(t), taking as input the control points P_0, ..., P_n at the base of the triangle and calculating as output values of the polynomial p(t) at the apex of the
triangle. Each value p_i^r(t) is an affine combination of the values p_{i−1}^{r−1}(t) and p_i^{r−1}(t) directly below it to its left and right.
FIG. 2.1. The de Boor algorithm (unnormalized) for a cubic curve with knots t_1, ..., t_6.
The functions along the edges of the graph are the coefficients on the right-hand side of the recursion. However, to simplify this and subsequent diagrams, we have omitted the normalizing constants in the denominators. To retrieve these constants, notice that if two edges point into the same node, then their labels must sum to one; therefore their common denominator is given by the sum of their numerators. Notice too that labels along contiguous edges are identical. That is, whatever label enters a node also leaves the node in the same direction. We call this property the "in-out property" of the diagram. This in-out property is characteristic of those polynomial schemes which we plan to study. By the in-out property, all we need to fully label the diagram are the labels along the bottom level of the diagram—that is, the labels along the arrows exiting from the points P_k, k = 0, ..., n. These labels are simple to remember: (t − t_k) exits P_k to the left, and (t_{k+n+1} − t) exits P_k to the right.

We could take the de Boor algorithm as the definition of the curve p(t). For a B-spline curve the knots must be nondecreasing, and no more than n adjacent knots may be identical; that is, t_i ≤ t_j whenever i < j, and t_{i+n} ≠ t_i. However, we could still run the de Boor algorithm even if the nondecreasing constraint is not satisfied, provided that none of the denominators that appear in the algorithm vanish. A necessary and sufficient condition for these denominators to be nonzero is that t_{j+n} ≠ t_i whenever 1 ≤ j ≤ i ≤ n. Following Ramshaw [21], [22], [23], we call such a sequence t_1, ..., t_{2n} of real numbers a progressive sequence. Similarly, we call the curves p(t) defined by the de Boor algorithm applied to progressive sequences progressive curves. This local generalization of B-splines has been discussed before in terms of urn models [1] and the de Boor-Fix dual functional [2]. Here we shall examine these curves from the perspective of blossoming.
Although progressive curves are no longer necessarily B-spline segments, nevertheless we shall still refer to the values t_1, ..., t_{2n} as the knots.
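The de Boor recursion for progressive curves can be sketched in code. The routine below is a minimal sketch (the function name and the example curve p(t) = t^3 are our own choices); the example uses a non-monotone progressive sequence to emphasize that only the condition t_{j+n} ≠ t_i is needed.

```python
def de_boor(T, P, t):
    """Evaluate a degree-n progressive curve at t.

    T -- progressive sequence t_1, ..., t_2n (need not be nondecreasing,
         but t_{j+n} != t_i is required for 1 <= j <= i <= n)
    P -- control points P_0, ..., P_n
    """
    n = len(P) - 1
    v = list(P)
    for r in range(1, n + 1):                  # levels of the triangle
        new = []
        for i in range(r, n + 1):
            ti, tj = T[i - 1], T[i + n - r]    # t_i and t_{i+n+1-r}, 1-indexed
            alpha = (t - ti) / (tj - ti)
            new.append((1 - alpha) * v[i - r] + alpha * v[i - r + 1])
        v = new
    return v[0]                                # p(t) at the apex

# A non-monotone progressive sequence: the algorithm still runs because
# every denominator t_{j+n} - t_i is nonzero.
T = [0, 2, 1, 3, 5, 4]
# Control points of p(t) = t^3: the blossom of t^3 is u1*u2*u3, so P_i is
# the product of the consecutive knots t_{i+1}, t_{i+2}, t_{i+3}.
P = [T[i] * T[i + 1] * T[i + 2] for i in range(4)]    # [0, 6, 15, 60]
```

With these inputs, de_boor(T, P, t) reproduces t^3 for every t, illustrating that the dual functional property holds for arbitrary progressive sequences.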
From Fig. 2.1 it is easy to see that the recursive evaluation algorithm for the polynomials p_i^r(t) also satisfies the in-out property, since the diagram for p_i^r(t) is a subdiagram of the diagram for p(t). Thus from the de Boor algorithm it follows by induction on k that each polynomial p_i^k(t) is a progressive curve of degree k with control points P_{i−k}, ..., P_i and knots t_{i−k+1}, ..., t_i, t_{i+n+1−k}, ..., t_{i+n}.
It also follows easily by induction on k that the polynomials p_i^k(t) depend linearly on the control points P_{i−k}, ..., P_i. In particular, there are degree n scalar-valued polynomials b_0^n(t), ..., b_n^n(t), depending on the progressive sequence t_1, ..., t_{2n}, such that

p(t) = b_0^n(t) P_0 + ... + b_n^n(t) P_n.
We shall show in §2.2.2 that for any progressive sequence t_1, ..., t_{2n} the polynomials b_0^n(t), ..., b_n^n(t) form a basis for the degree n polynomials in t. Since they are the blending functions for progressive curves, we call these polynomials a progressive basis. By definition, the function b_k^n(t) is the coefficient of P_k in the polynomial p(t) defined by the de Boor algorithm. Thus b_k^n(t) is the sum of all paths from P_k at the base of the triangle to p(t) at the apex of the triangle (see Fig. 2.1). There are actually two distinct ways that we can compute the blending functions b_k^n(t) recursively. The first is simply to set P_j = δ_{jk} and run the de Boor algorithm. This technique computes b_k^n(t) recursively by applying successive affine combinations. The second is to run the de Boor algorithm in reverse, proceeding from the apex (starting with the value 1) to the base of the triangle rather than from the base to the apex (see Fig. 2.2). This reverse procedure produces the same basis functions because it too computes b_k^n(t) as the sum of all paths between the apex and the kth position at the base of the triangle. If we denote the dependency of b_k^m(t) on the 2m parameters t_{n−m+1}, ..., t_{n+m} by b_k^m[t_{n−m+1}, ..., t_{n+m}](t), then the latter recursion formula can be written as
b_k^{m+1}[t_{n−m}, ..., t_{n+m+1}](t) = ((t − t_{n−m+k−1}) / (t_{n+k} − t_{n−m+k−1})) b_{k−1}^m[t_{n−m+1}, ..., t_{n+m}](t) + ((t_{n+k+1} − t) / (t_{n+k+1} − t_{n−m+k})) b_k^m[t_{n−m+1}, ..., t_{n+m}](t),   (2.2)

where m = 0, ..., n − 1, terms with out-of-range indices are taken to be zero, and b_0^0(t) = 1 (see Fig. 2.2). It follows by induction on m that b_k^n(t) is a polynomial of degree n that depends only on the knots t_k, ..., t_{k+n+1}, and that the polynomials b_k^n(t), k = 0, ..., n, sum to one. Equation (2.2) is a local form of the Cox-de Boor-Mansfield recurrence for computing the B-spline basis functions. Notice that in this recurrence the combinations are no longer affine.

FIG. 2.2. The Cox-de Boor-Mansfield recurrence (unnormalized) for the cubic progressive basis functions.

2.2.2. Ramshaw's Blossoming Algorithm. Ramshaw's blossoming algorithm uses the de Boor algorithm to construct the unique symmetric multiaffine polynomial whose diagonal is the original progressive polynomial. This multivariate polynomial is called the blossom [21], [27] or polar form [15], [23] of the progressive polynomial and can be used to generate dual functionals for the corresponding progressive basis. These dual functionals will be the main tool in our subsequent derivations of algorithms for progressive curves. To discuss blossoming and its ramifications, we first need to introduce the notion of symmetric multiaffine polynomials. A symmetric multiaffine polynomial in n variables is a multivariate polynomial B(u_1, ..., u_n) which satisfies the following two conditions:

symmetry: B(u_{π(1)}, ..., u_{π(n)}) = B(u_1, ..., u_n) for every permutation π of {1, ..., n};

multiaffine: B(u_1, ..., u_n) is an affine function (degree at most one) of each variable u_i when the other variables are held fixed.
The multiaffine property is equivalent to the fact that each variable u_i appears to, at most, the first power. Hence the monomials u_{i_1} ... u_{i_k} form a basis for the multiaffine polynomials. Symmetry requires that the coefficient of u_{i_1} ... u_{i_k} be identical to the coefficient of u_{j_1} ... u_{j_k}. Therefore we can write any symmetric multiaffine polynomial in the form

B(u_1, ..., u_n) = Σ_{k=0}^n c_k { Σ u_{i_1} ... u_{i_k} },   (2.3)

where the sum in braces is taken over all subsets {i_1, ..., i_k} of order k of the integers {1, ..., n}. Thus the symmetric sums of order k

S_k(u_1, ..., u_n) = Σ_{1 ≤ i_1 < ... < i_k ≤ n} u_{i_1} ... u_{i_k}

form a basis for the symmetric multiaffine polynomials in n variables.
There is a simple recursion formula for evaluating symmetric multiaffine polynomials which, as we shall see, is closely related to the de Boor algorithm for evaluating points on progressive curves. By (2.3) any symmetric multiaffine polynomial in n variables is completely determined by n + 1 coefficients c_0, ..., c_n. It follows that B(u_1, ..., u_n) is completely determined by specifying it at n + 1 independent sets of parameter values. (Because of the multiaffine and symmetry properties, not all sets of parameter values are independent.) Fix a progressive sequence t_1, ..., t_{2n}. Following Ramshaw [21], [22], [23], we call the values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}) progressive values. Using the following algorithm, we can easily compute any value B(u_1, ..., u_n) recursively once we know the n + 1 progressive values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}).

Recursive Evaluation Algorithm for Symmetric Multiaffine Polynomials. Let

B(u_1, ..., u_r, t_{i+1}, ..., t_{i+n−r}) = ((t_{i+n+1−r} − u_r) B(u_1, ..., u_{r−1}, t_i, ..., t_{i+n−r}) + (u_r − t_i) B(u_1, ..., u_{r−1}, t_{i+1}, ..., t_{i+n+1−r})) / (t_{i+n+1−r} − t_i),

where r = 1, ..., n and i = r, ..., n. Then the desired value B(u_1, ..., u_n) emerges at the level r = n.
We illustrate this algorithm graphically for functions of three variables in Fig. 2.3. Notice that as in the de Boor algorithm, this recursive evaluation algorithm computes a triangular array of values, taking as input the progressive values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}) at the base of the triangle and calculating as output values of the symmetric multiaffine polynomial B(u_1, ..., u_n) at the apex of the triangle. Again each value B(u_1, ..., u_r, t_{i+1}, ..., t_{i+n−r}) is an affine combination of the values B(u_1, ..., u_{r−1}, t_i, ..., t_{i+n−r}) and B(u_1, ..., u_{r−1}, t_{i+1}, ..., t_{i+n+1−r}) directly below it to its left and right. Indeed, this algorithm is virtually the same as the de Boor algorithm except that u_r is substituted for t on the right-hand side in the rth level of the recursion. To derive this algorithm, simply notice that we can write u_r as an affine combination of t_i and t_{i+n+1−r} by setting

u_r = ((t_{i+n+1−r} − u_r) / (t_{i+n+1−r} − t_i)) t_i + ((u_r − t_i) / (t_{i+n+1−r} − t_i)) t_{i+n+1−r}.
FIG. 2.3. Recursive evaluation algorithm (unnormalized) for symmetric multiaffine polynomials in three variables.

Since B is symmetric and multiaffine, it follows that

B(u_1, ..., u_r, t_{i+1}, ..., t_{i+n−r}) = ((t_{i+n+1−r} − u_r) B(u_1, ..., u_{r−1}, t_i, ..., t_{i+n−r}) + (u_r − t_i) B(u_1, ..., u_{r−1}, t_{i+1}, ..., t_{i+n+1−r})) / (t_{i+n+1−r} − t_i),
which is precisely the rth level of the recursive evaluation algorithm. The diagonal B(t, ..., t) of a symmetric multiaffine polynomial in n variables is a univariate polynomial of degree n. Moreover, if we make the substitution u_r = t, r = 1, ..., n, then the recursive evaluation algorithm for the symmetric multiaffine polynomial B(u_1, ..., u_n) specializes to a recursive evaluation algorithm for the diagonal univariate polynomial B(t, ..., t). Notice that this recursive evaluation algorithm for B(t, ..., t) is precisely the de Boor algorithm for the progressive curve with knots t_1, ..., t_{2n} and control points B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}). Thus

B(t, ..., t) = Σ_{i=0}^n b_i^n(t) B(t_{i+1}, ..., t_{i+n}).   (2.5)
There is a 1-1 correspondence between univariate polynomials of degree n and symmetric multiaffine polynomials in n variables. To find the univariate degree n polynomial p(t) corresponding to the symmetric multiaffine polynomial B(u_1, ..., u_n), simply take its diagonal B(t, ..., t). Conversely, given a univariate polynomial of degree n, there exists a unique symmetric multiaffine polynomial in n variables whose diagonal reduces to the original polynomial.
The unique symmetric multiaffine polynomial corresponding to p(t) is denoted by B_p(u_1, ..., u_n) and is called the blossom or polar form of p(t). One way to find B_p(u_1, ..., u_n) is to write p(t) in terms of the monomial basis, blossom the monomial basis, and then apply linearity. Since the blossom of the monomial t^k (considered as a polynomial of degree n) is simply S_k(u_1, ..., u_n) divided by the binomial coefficient (n choose k), we have the correspondence

t^k ↔ S_k(u_1, ..., u_n) / (n choose k).
Given a progressive curve p(t), we would like to find its blossom B_p(u_1, ..., u_n) directly from the de Boor algorithm without converting it to monomial form. If p(t) = B(t, ..., t), then we know how to recover B(u_1, ..., u_n): simply replace t by u_r on the rth level of the recursion. This observation suggests the following general algorithm.

Ramshaw's Blossoming Algorithm [21]. The blossom B_p(u_1, ..., u_n) of a progressive curve p(t) can be computed recursively by substituting u_r for t on the right-hand side in the rth level of the de Boor algorithm.

This blossoming algorithm is virtually identical to the recursive evaluation algorithm for the symmetric multiaffine polynomial B_p(u_1, ..., u_n) except that it starts with the control points P_0, ..., P_n rather than with the progressive values B_p(t_1, ..., t_n), ..., B_p(t_{n+1}, ..., t_{2n}) (compare Figs. 2.1 and 2.3). Thus to prove that the algorithm is valid, we need only show that for progressive curves the control points are given by the blossom evaluated at consecutive knots. We shall now give a simple proof of this result. Let p(t) be a polynomial with blossom B_p(u_1, ..., u_n). Then

p(t) = B_p(t, ..., t) = Σ_{k=0}^n b_k^n(t) B_p(t_{k+1}, ..., t_{k+n}).   (2.7)
The first equality is just the diagonal property of the blossom; the second equality is simply (2.5) applied to B_p. From (2.7) every degree n polynomial p(t) can be written as a linear combination of b_0^n(t), ..., b_n^n(t). Therefore the n + 1 degree n polynomials b_0^n(t), ..., b_n^n(t) form a basis for all polynomials of degree n. Moreover, the coefficient of p(t) with respect to b_k^n(t) is precisely B_p(t_{k+1}, ..., t_{k+n}). Therefore we have shown that if P_0, ..., P_n are the control points of p(t) relative to the progressive basis b_0^n(t), ..., b_n^n(t), then

P_k = B_p(t_{k+1}, ..., t_{k+n}), k = 0, ..., n.   (2.8)
That is, for progressive curves the control points are given by the blossom evaluated at consecutive knots. These formulas are called the blossoming form of the dual functionals for the progressive basis b_0^n(t), ..., b_n^n(t) because they provide the coefficients of p(t) relative to this basis. These dual functionals will be the main tool in much of our subsequent analysis.
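Ramshaw's blossoming algorithm is a one-line modification of the de Boor code: substitute u_r for t at level r. The sketch below is a minimal illustration (the function name and the example curve p(t) = t^3 are our own choices, not from the text).

```python
def blossom(T, P, u):
    """Ramshaw's blossoming algorithm: evaluate the multiaffine blossom
    B_p(u_1, ..., u_n) by substituting u_r for t in level r of de Boor.

    T -- progressive sequence t_1, ..., t_2n
    P -- control points, with P_i = B_p(t_{i+1}, ..., t_{i+n}) by (2.8)
    u -- blossom arguments u_1, ..., u_n
    """
    n = len(P) - 1
    v = list(P)
    for r in range(1, n + 1):
        v = [((T[i + n - r] - u[r - 1]) * v[i - r]
              + (u[r - 1] - T[i - 1]) * v[i - r + 1])
             / (T[i + n - r] - T[i - 1])
             for i in range(r, n + 1)]
    return v[0]

# For p(t) = t^3 the blossom is B_p(u1, u2, u3) = u1*u2*u3, so the dual
# functional property (2.8) gives the control points below, and the
# algorithm recovers any blossom value.
T = [0, 1, 2, 3, 4, 5]
P = [0, 6, 24, 60]
val = blossom(T, P, [1.5, 2.5, 3.5])    # = 1.5 * 2.5 * 3.5
```

Setting all the u_r equal recovers the diagonal, i.e. the de Boor algorithm itself.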
2.2.3. Boehm's Knot Insertion Algorithm. Given the control points P_0, ..., P_n of a polynomial curve p(t) relative to one progressive sequence t_1, ..., t_{2n}, we would like a change of basis technique for finding the control points Q_0, ..., Q_n of p(t) relative to some other progressive sequence u_1, ..., u_{2n}. Boehm's knot insertion algorithm finds the control points Q_0, ..., Q_n and R_0, ..., R_n relative to the progressive sequences t_1, ..., t_n, u, t_{n+1}, ..., t_{2n−1} and t_2, ..., t_n, u, t_{n+1}, ..., t_{2n}. Repeated application of Boehm's algorithm can be used to find the control points of p(t) relative to any arbitrary progressive sequence u_1, ..., u_{2n}. Boehm's algorithm is essentially one stage of the de Boor algorithm. By (2.8),

Q_i = B_p(t_{i+1}, ..., t_n, u, t_{n+1}, ..., t_{n+i−1}), R_i = B_p(t_{i+2}, ..., t_n, u, t_{n+1}, ..., t_{n+i}).
But these new points Q_i and R_i are precisely the points on the first level of the recursive evaluation algorithm for B_p(u, ..., u) relative to the progressive values B_p(t_1, ..., t_n), ..., B_p(t_{n+1}, ..., t_{2n}) which, as we have seen, are exactly the points on the first level of the de Boor algorithm for p(u) relative to the control points P_0, ..., P_n (see Figs. 2.1 and 2.3). Using the first level of the de Boor algorithm for p(u) relative to the progressive sequence t_1, ..., t_{2n} to find the new control points for p(t) relative to the progressive sequences t_1, ..., t_n, u, t_{n+1}, ..., t_{2n−1} and t_2, ..., t_n, u, t_{n+1}, ..., t_{2n} is Boehm's knot insertion algorithm. Thus to insert one new knot using Boehm's algorithm requires the calculation of n affine combinations. Hence using Boehm's knot insertion algorithm to convert from control points relative to t_1, ..., t_{2n} to control points relative to u_1, ..., u_{2n} would, in general, require the computation of 2n² affine combinations, although, as we shall see in the following section, some of these combinations are superfluous.

To insert the same knot u any number of times d ≤ n, notice that by (2.8) the new control points Q_0, ..., Q_n relative to the knot sequence t_1, ..., t_n, u, ..., u, t_{n+1}, ..., t_{2n−d} (with u repeated d times) are given by Q_i = B_p(t_{i+1}, ..., t_n, u, ..., u) for i ≤ d (with u appearing i times), and Q_i = B_p(t_{i+1}, ..., t_n, u, ..., u, t_{n+1}, ..., t_{n+i−d}) for i > d (with u appearing d times). Thus we can simply run the de Boor algorithm up to the dth level and read the new control points off the left edge and the dth level of the triangle. Inserting the d knots one at a time would require the calculation of dn affine combinations. Boehm's procedure for inserting multiple knots is more efficient because it requires only dn − d(d − 1)/2 affine combinations. Notice that to perform n-fold knot insertion at u, we simply run the entire de Boor algorithm and read the new control points off the edges of the diagram. Thus n-fold knot insertion is equivalent to the de Boor algorithm.

Originally Boehm developed this knot insertion algorithm only for B-spline curves [8], but here we have extended it to arbitrary progressive schemes. Boehm's original approach was based upon divided differences rather than blossoming. A derivation of this knot insertion algorithm for B-spline curves using blossoming, similar to the derivation provided here for progressive schemes, was first given by Seidel [26].

2.2.4. The Oslo Algorithm. Boehm's algorithm inserts one new knot at a time; the Oslo algorithm inserts many new knots simultaneously. The Oslo algorithm has many variations. The original paper on the Oslo algorithm [16] gives two basic variations: one for computing the control points directly and one for calculating the transformation matrix. Later a faster version of the Oslo algorithm was provided by Lyche and Morken [19], and subsequently another fast version was derived by Goldman [18]. The basic step of control point variants of the Oslo algorithm is to find a single new control point. Since each control point can be expressed as the blossom evaluated at the appropriate knots, the new control points can be found by applying the blossoming algorithm. The basic step of the Oslo algorithm is Ramshaw's blossoming algorithm [6, Chap. 4, this volume], [18]. Because the blossoming algorithm does not require a nondecreasing knot sequence, the Oslo algorithm extends to progressive curves. Indeed let p(t) be a progressive curve defined relative to the progressive sequence t_1, ..., t_{2n}, and let u_1, ..., u_{2n} be a new progressive sequence.
To find the control points of p(t) relative to the knot sequence u_1, ..., u_{2n}, we need only run the blossoming algorithm to compute B_p(u_{i+1}, ..., u_{i+n}), i = 0, ..., n. Note that in addition to removing the nondecreasing constraint from the knots, we have also modified the Oslo algorithm slightly to apply it to polynomial rather than to piecewise polynomial curves. Since the blossoming algorithm uses n(n + 1)/2 affine combinations, applying the original version of the Oslo algorithm to progressive curves requires n(n + 1)²/2 affine combinations. However, this number can be decreased dramatically by using other variants of the Oslo algorithm. Goldman's variant is particularly well suited to progressive curves because it is a local polynomial procedure [18]. The Goldman variant incorporates the insights of Lyche and Morken [19] together with the symmetry of the blossom to produce an algorithm which invokes the blossoming algorithm only twice rather than n + 1
times. Goldman's variant was originally applied to B-spline curves; here we shall modify this variant slightly to apply it more readily to progressive curves. Given the control points P_0, ..., P_n of a polynomial curve p(t) relative to one progressive sequence t_1, ..., t_{2n}, this algorithm first finds the control points Q_0, ..., Q_n or R_0, ..., R_n of p(t) relative to the progressive sequences t_1, ..., t_n, u_{n+1}, ..., u_{2n} or u_1, ..., u_n, t_{n+1}, ..., t_{2n}, respectively. Thus by running these two procedures in sequence, we can convert from control points relative to the progressive sequence t_1, ..., t_{2n} to control points relative to the progressive sequence u_1, ..., u_{2n}. Indeed by (2.8),

Q_i = B_p(t_{i+1}, ..., t_n, u_{n+1}, ..., u_{n+i}).
Thus the control points Q_i are precisely the points on the left edge of the recursive evaluation algorithm for B_p(u_{n+1}, ..., u_{2n}) relative to the progressive values B_p(t_1, ..., t_n), ..., B_p(t_{n+1}, ..., t_{2n}), and these points are exactly the points on the left edge of Ramshaw's blossoming algorithm for p(t) evaluated at the knots u_{n+1}, ..., u_{2n}.
FIG. 2.4(a). Goldman's version of the Oslo algorithm (unnormalized) for inserting the knots u_1, u_2, u_3 into a cubic curve. Notice that the order in which the new knots appear in different levels of the algorithm is critical both here and in Fig. 2.4(b).
FIG. 2.4(b). Goldman's version of the Oslo algorithm (unnormalized) for inserting the knots u_4, u_5, u_6 into a cubic curve.
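Since each new control point is a blossom value at n consecutive new knots, an Oslo-style insertion routine can be sketched directly from the blossoming algorithm. The sketch below implements the direct (original) variant, which blossoms once per new control point; the function names and the cubic Bezier-form example are our own choices, not from the text.

```python
def blossom(T, P, u):
    """Multiaffine blossom of a progressive curve (Ramshaw's algorithm)."""
    n = len(P) - 1
    v = list(P)
    for r in range(1, n + 1):
        v = [((T[i + n - r] - u[r - 1]) * v[i - r]
              + (u[r - 1] - T[i - 1]) * v[i - r + 1])
             / (T[i + n - r] - T[i - 1])
             for i in range(r, n + 1)]
    return v[0]

def insert_knots(T, P, U):
    """Control points of the same curve relative to a new progressive
    sequence U: by the dual functionals, Q_i = B_p(u_{i+1}, ..., u_{i+n})."""
    n = len(P) - 1
    return [blossom(T, P, U[i:i + n]) for i in range(n + 1)]

# Cubic p(t) = t^3 in Bezier form over [0, 1] (knots 0,0,0,1,1,1); refine
# to the progressive sequence containing an extra knot at 0.5.
T = [0, 0, 0, 1, 1, 1]
P = [0, 0, 0, 1]
U = [0, 0, 0.5, 1, 1, 1]
Q = insert_knots(T, P, U)    # blossom u1*u2*u3 at consecutive new knots
```

Here each Q_i equals the product of three consecutive entries of U, exactly as the dual functional property predicts for the blossom of t^3.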
would, in general, require 2n² affine combinations. However, for inserting new knots into B-spline curves, Boehm's algorithm and Goldman's version of the Oslo algorithm are known to require the same amount of computation [18]. The difference between progressive polynomials and B-splines is that for B-spline curves additional adjacent curve segments must be created which join together smoothly. These adjacent segments require the calculation of additional control points.
These points are calculated by Boehm's algorithm, but not by the progressive version of the Oslo algorithm. When this extra computation is performed for B-splines, Goldman's version of the Oslo algorithm ends up doing the same amount of work as Boehm's algorithm. Alternatively, we can modify the progressive version of Boehm's algorithm so that it does not perform these extraneous calculations; then Boehm's algorithm and the Oslo algorithm would again require the same amount of computation.

2.2.5. Sablonniere's Change of Basis Algorithm. Two sequences t_1, ..., t_{2n} and u_1, ..., u_{2n} may both be progressive, and yet the sequences t_1, ..., t_n, u_{n+1}, ..., u_{2n}, or u_1, ..., u_n, t_{n+1}, ..., t_{2n}, or t_1, ..., t_{n−i}, u_i, ..., u_{i+j}, t_{n+1}, ..., t_{2n−j} need not be progressive. If any one of these intermediate sequences is not progressive, we may not be able to use Boehm's algorithm or the Oslo algorithm to convert from control points relative to t_1, ..., t_{2n} to control points relative to u_1, ..., u_{2n} without encountering singularities in the computation. While it is possible to avoid these singularities, either by rearranging the order in which the knots are inserted or by introducing other additional intermediate knots, special provisions would need to be made for these particular cases. One way to avoid these potential singularities is to use the original version of the Oslo algorithm [16] which performs all its calculations directly from the original control points. Unfortunately, this version of the Oslo algorithm is relatively slow. An alternative method is to apply Sablonniere's change of basis algorithm. The original version of Sablonniere's algorithm converted a curve from B-spline to piecewise Bezier form [24]. Here we shall generalize this algorithm to transform between any two arbitrary progressive schemes.

Ramshaw's blossoming algorithm can be run n + 1 times to compute the new control points B_p(u_{i+1}, ..., u_{i+n}) from the original control points B_p(t_{i+1}, ..., t_{i+n}). This direct approach would require us to compute a total of n(n + 1)²/2 affine combinations. Sablonniere's algorithm is just a way to organize all these
computations to reduce the total amount of calculation by taking advantage of certain overlaps in these blossoming computations. Sablonniere's algorithm computes a tetrahedral array. If we run the direct version of Ramshaw's blossoming algorithm to compute the control points B_p(u_{i+1}, ..., u_{i+n}), we need to calculate n + 1 triangular arrays. We can stack these arrays one atop the other to form a triangular prism with no connections between the layers. Sablonniere's algorithm improves upon this calculation by stacking the triangles into a triangular pyramid with links between the various layers. Let B be an arbitrary symmetric multiaffine function. The general Sablonniere algorithm computes n + 1 arbitrary progressive values B(u_1, ..., u_n), ..., B(u_{n+1}, ..., u_{2n}) given the fixed progressive values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}).

Sablonniere's Change of Basis Algorithm [24]. Let
where s = 1, ..., n and i = s, ..., n. Furthermore, let
where s = 0, ..., n. Then
We illustrate Sablonniere's tetrahedral algorithm in Fig. 2.5. The general schematic for curves of arbitrary degree is presented in Fig. 2.5(a); the cubic case is illustrated in Fig. 2.5(b). It will help to keep these figures in view as we explain the workings of the algorithm. The elements
where s = 0, ..., n, r = 0, ..., n − s, i = r + s, ..., n, form a tetrahedral array. The superscript s keeps track of the height in the tetrahedron, the
superscript r keeps track of the row in the triangle, and the subscript i keeps track of the entry in the row. The original progressive values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}) are input along one edge of the base of the tetrahedron and the new progressive values B(u_1, ..., u_n), ..., B(u_{n+1}, ..., u_{2n}) emerge along the skew edge of the tetrahedron.
FIG. 2.5(a). Sablonniere's tetrahedral change of basis algorithm, viewed from above.
Sablonniere's algorithm is organized into n + 1 triangles of depth n. Each triangle computes one new control point by applying the recursive evaluation algorithm for symmetric multiaffine polynomials. Along the triangular base of the tetrahedron, we run the recursive evaluation algorithm for B(u_n, ..., u_1). Along the lateral front face—that is, along the triangular side of the tetrahedron connected to the edge with the original progressive values—we run the recursive evaluation algorithm for B(u_{n+1}, ..., u_{2n}). The remainder of the triangles are folded over with the lower part running up the front face of the tetrahedron and the upper part lying parallel to the base. Running up the tetrahedron to height s and then along the triangle at that height parallel to the base, we compute B(u_{n+1}, ..., u_{n+s}, u_n, ..., u_{s+1}) = B(u_{s+1}, ..., u_{s+n}). Notice that the parameters u_{n+1}, ..., u_{n+s} run along the lateral side of the tetrahedron while the parameters u_n, ..., u_{s+1} run along the triangle parallel to the base. This structure allows us to take advantage of overlapping computations by reusing again and again the computations running up the lateral side of the tetrahedron. The new progressive values B(u_1, ..., u_n), ..., B(u_{n+1}, ..., u_{2n}) emerge at the apexes of the triangles parallel to the base—that is, along the edge of the tetrahedron which is skew to the edge with the input control points. Given some fixed progressive values B(t_1, ..., t_n), ..., B(t_{n+1}, ..., t_{2n}), Sablonniere's algorithm computes arbitrary progressive values B(u_1, ..., u_n), ..., B(u_{n+1}, ..., u_{2n}). Setting B = B_p, this procedure is precisely what we require to convert from control points of p(t) relative to t_1, ..., t_{2n} to control points of p(t) relative to u_1, ..., u_{2n}. That is, we simply run Sablonniere's algorithm with P_i = B(t_{i+1}, ..., t_{i+n}), i = 0, ..., n, as
26
Algorithms for Progressive Curves
Original Control Points
FIG. 2.5(b). Sablonnière's algorithm (unnormalized) for cubic curves. The first triangle is the base of the tetrahedron. The next three smaller triangles are the cross sections of the tetrahedron parallel to the base. The final triangle is the lateral front face of the tetrahedron. Notice that the four levels of the lateral triangle are just the bottom levels of the four cross sections parallel to the base. Proceeding up this lateral triangle and then along the triangles parallel to the base yields the desired new control points.
Knot Insertion and Deletion Algorithms
27
the input to the algorithm. Sablonnière's algorithm is generally more expensive than either the Oslo algorithm or Boehm's knot insertion algorithm. In a tetrahedron of height n there are n + 1 triangles parallel to the base and the kth triangle from the top has k(k + 1)/2 nodes. Since in the base triangle we do not need to compute the points along the bottom row, we actually need to calculate a total of
affine combinations. Therefore to calculate the n + 1 new control points for a progressive curve of degree n, Sablonnière's algorithm computes O(n^3) affine combinations while both Boehm's knot insertion algorithm and the Oslo algorithm compute only O(n^2) affine combinations. Sablonnière's algorithm has fallen out of favor because, in general, it is much slower than other knot insertion procedures. We can speed up Sablonnière's algorithm by considering paths not only in the triangles parallel to the base but also in the triangles parallel to the lateral front face. Along triangles parallel to the base at height s, we substitute the knots u_n, ..., u_{s+1} in decreasing order; similarly, along triangles parallel to the front face at depth r, we substitute the knots u_{n+1}, ..., u_{2n-r} in increasing order. Now any path in the tetrahedron leading from the original control points to a node on the skew edge will correctly compute a new control point. To optimize Sablonnière's algorithm, we need to choose the collection of paths through the tetrahedron which require the fewest computations—that is, the paths with the most overlap. However, even if we optimize Sablonnière's algorithm in this manner, we will always need to compute all the values on the base and the front face of the tetrahedron since these are the only paths leading to the points B(u_1, ..., u_n) and B(u_{n+1}, ..., u_{2n}). Thus more than just two triangles need to be computed no matter how cleverly we organize the computation. Therefore, for progressive curves, Sablonnière's algorithm is always slower and less efficient than the Oslo algorithm. Nevertheless, Sablonnière's algorithm does have two advantages over other knot insertion techniques. First, as we have already seen, although for B-spline curves intermediate sequences like t_1, ..., t_n, u_1, ..., u_i, t_{n+1}, ..., t_{2n-i} are always progressive, for general progressive polynomials this is not always the case.
Thus Sablonnière's algorithm may be the simplest and best alternative for avoiding singularities when these intermediate sequences are not progressive. Second, in Sablonnière's algorithm all the new control points are computed directly from the original array of control points rather than from an intermediate sequence as in the other knot insertion schemes. This
procedure may help to guard against the propagation of numerical errors in the calculations. This property is important because, unlike the case of B-splines, in the general progressive case knot insertion procedures are not particularly stable; the affine combinations in these algorithms are no longer necessarily convex combinations. (The original Oslo algorithm also computes all the new control points directly from the original control points [16], but it pays for this property with much reduced speed. The original Oslo algorithm could have been substantially improved by applying a Sablonnière-type tetrahedral technique to take advantage of overlapping computations. Apparently this was never done, but, in any event, this would not lead to a scheme as fast as some other variants of the Oslo algorithm [5, Chap. 3, this volume], [6, Chap. 4, this volume], [18].) In the progressive case for n = 2, 3, 4, Sablonnière's algorithm actually requires fewer computations than Boehm's knot insertion algorithm. The difference between the progressive polynomial case and the B-spline case is again that for B-splines additional adjacent curve segments must be created which join together smoothly. When this extra computation is performed for B-spline curves, Sablonnière's algorithm again ends up doing more work than Boehm's knot insertion algorithm even for n = 2, 3, 4. For further discussion of Sablonnière's algorithm, see Chapter 4 of this volume [6].

2.2.6. Marsden's Identity and the de Boor-Fix Form of the Dual Functionals. We end our discussion of algorithms for progressive curves by deriving an alternative form of the dual functionals for progressive curves along with a closely related identity. Again our main tool will be blossoming. Given a progressive sequence t_1, ..., t_{2n}, we introduce the degree n polynomials
Expanding ψ_i(t) in terms of the power basis, we obtain

where S_k(t_{i+1}, ..., t_{i+n}) are the symmetric sums of order k defined in (2.4). Therefore
THEOREM 2.2.1 (MARSDEN'S IDENTITY [20]).
Proof. Fix t and let Ψ(x) = (x - t)^n. By (2.8), to find the coefficients of Ψ(x) relative to the basis b_0^n(x), ..., b_n^n(x), simply blossom Ψ(x) and evaluate it at consecutive knots. But by construction B_Ψ(t_{i+1}, ..., t_{i+n})
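For the B-spline special case, Marsden's identity can be verified numerically. The sketch below (the function names are ours) uses the Cox-de Boor recursion and one common sign convention, ψ_i(u) = (t_{i+1} - u) ··· (t_{i+n} - u), under which (x - u)^n = Σ_i ψ_i(u) b_i^n(x) for x inside the valid span of the knot vector:

```python
from math import prod

def bspline(i, n, x, t):
    """Cox-de Boor recursion for the B-spline basis function b_i^n over knots t."""
    if n == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    s = 0.0
    if t[i + n] > t[i]:
        s += (x - t[i]) / (t[i + n] - t[i]) * bspline(i, n - 1, x, t)
    if t[i + n + 1] > t[i + 1]:
        s += (t[i + n + 1] - x) / (t[i + n + 1] - t[i + 1]) * bspline(i + 1, n - 1, x, t)
    return s

def psi(i, n, u, t):
    """Dual polynomial psi_i(u) = (t_{i+1} - u) * ... * (t_{i+n} - u)."""
    return prod(t[i + j] - u for j in range(1, n + 1))

# Check (x - u)^n == sum_i psi_i(u) * b_i^n(x) inside the valid span [t_n, t_{m-n}].
n, t = 3, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
x, u = 4.5, 2.0
lhs = (x - u) ** n
rhs = sum(psi(i, n, u, t) * bspline(i, n, x, t) for i in range(len(t) - n - 1))
assert abs(lhs - rhs) < 1e-9
```

The identity holds for every u, which is what makes the ψ_i a dual basis: the coefficient functionals of any polynomial can be read off by differentiating ψ_i, as the de Boor-Fix formulas below make precise.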
Knot Insertion and Deletion Algorithms
29
THEOREM 2.2.2 (DE BOOR-FIX DUAL FUNCTIONALS [12]).
Proof. First notice that the right-hand side is a constant independent of t since its derivative is zero. Now by Taylor's theorem
Blossoming both sides, evaluating at the knots, and applying (2.9), we obtain
The functionals λ_0, ..., λ_n defined by the formulas
are called the de Boor-Fix form of the dual functionals for the progressive basis b_0^n(t), ..., b_n^n(t) because by (2.8) and (2.10)
Since these dual functionals are defined in terms of the polynomial basis ψ_0(t), ..., ψ_n(t), we shall often abuse nomenclature and call the polynomial basis ψ_0(t), ..., ψ_n(t) the dual basis to the progressive basis b_0^n(t), ..., b_n^n(t).
The normalized dual basis ao^o(0' • • -ian^7i(t} is sometimes called the Polya basis and has been studied extensively [1], [2], [4]. By introducing the constants «o, • • • , «7i into the denominators of the dual functionals A 0 , . . . , A n , we can take the Polya basis a^o(t),..., antyn(t) as the dual basis to the corresponding progressive basis ^ o ( ^ ) , . . . , b™(t). 2.3.
Homogenization
We have shown that there is a rich collection of algorithms for progressive curves. The purpose of this section is to enlarge the category of progressive curves by introducing the notions of homogeneous, affine, and vector-valued
knots. We shall also develop differentiation formulas for progressive curves and integration algorithms for progressive bases. Our guiding principle here is the technique of homogenization. We begin in §2.3.1 by introducing a homogeneous version of the de Boor algorithm. This variant allows us to extend the notion of a progressive sequence to knots with homogeneous values. To study ordinary progressive curves, we applied the multiaffine blossom of a polynomial. Since the combinations in the homogeneous de Boor recurrence are linear rather than affine, we can no longer employ the multiaffine blossom; rather we shall need to use the multilinear blossom [21]. The multilinear blossom and its properties are introduced in §2.3.2. We then use the multilinear blossom to extend the algorithms for progressive curves derived in §2.2 to progressive curves with homogeneous knots. In §2.3.3 we apply the multilinear blossom to develop differentiation algorithms for progressive curves, and we show how to perform differentiation by applying knot insertion procedures with vector-valued knots. For B-spline curves, derivatives of adjacent segments agree at the joints. In §2.3.4 we show how to extend this result on matching derivatives to homogeneous progressive curves. We also discuss what it means geometrically when the derivatives of two progressive curves agree at homogeneous or vector-valued knots. Integration procedures for progressive bases are developed in §2.3.5. Here we apply the differentiation results of §2.3.3 to extend to progressive bases some well-known integration formulas for B-splines [14].

2.3.1. The Homogeneous de Boor Algorithm. There is a simple correspondence between homogeneous bivariate polynomials and ordinary univariate polynomials. Given a univariate polynomial of degree n, we can homogenize it by multiplying each term t^k by w^(n-k). Similarly, given a homogeneous bivariate polynomial of degree n, we can dehomogenize it by setting w = 1.
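The homogenization correspondence is easy to make concrete. The sketch below (the function name is ours) evaluates the homogenization p(t, w) = Σ_k c_k t^k w^(n-k) of a univariate polynomial and checks the two defining properties, dehomogenization at w = 1 and degree-n homogeneity:

```python
def eval_homogeneous(coeffs, t, w):
    """Evaluate p(t, w) = sum_k c_k t^k w^(n-k), the homogenization of the
    univariate polynomial p(t) = sum_k c_k t^k with n = len(coeffs) - 1."""
    n = len(coeffs) - 1
    return sum(c * t**k * w**(n - k) for k, c in enumerate(coeffs))

coeffs = [1.0, -2.0, 3.0]          # p(t) = 1 - 2t + 3t^2, so n = 2

# Dehomogenizing (setting w = 1) recovers the original polynomial.
assert abs(eval_homogeneous(coeffs, 0.5, 1.0) - (1 - 2 * 0.5 + 3 * 0.25)) < 1e-12

# Homogeneity of degree n: p(s*t, s*w) = s^n * p(t, w); here (1, 2) = 2 * (0.5, 1).
assert abs(eval_homogeneous(coeffs, 1.0, 2.0)
           - 4.0 * eval_homogeneous(coeffs, 0.5, 1.0)) < 1e-12
```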
Often we shall not carefully distinguish between a univariate polynomial Σ_k c_k t^k and the corresponding homogeneous polynomial Σ_k c_k t^k w^(n-k) since these two polynomials are simply two variants of the same basic object. Let p(t) be a progressive curve with knots t_1, ..., t_{2n} and control points P_0, ..., P_n. To compute the corresponding homogeneous polynomial p(t, w) recursively, we must homogenize the de Boor algorithm. In practice, this means replacing the numerators ±(t - t_k) by ±(t - t_k w) on the right-hand side of the recursion formula. But if we can extend the parameters of the polynomial to be homogeneous, then why not the knots as well? If we are going to extend the standard de Boor algorithm to homogeneous knot sequences (t_1, w_1), ..., (t_{2n}, w_{2n}), then the homogeneous version of the de Boor algorithm must reduce to the ordinary de Boor algorithm when w_k = 1, k = 1, ..., 2n. One way to achieve this goal
is to homogenize the coefficients on the right-hand side of the de Boor algorithm by homogenizing the numerator and the denominator independently—that is, by making the substitutions
This construction leads to the following homogeneous variant of the de Boor algorithm.

The Homogeneous de Boor Algorithm. Let
where r = 1, ..., n. Then
We illustrate the homogeneous de Boor algorithm for cubic curves in Fig. 2.6. As usual we have omitted the denominators from the diagram even though the labels entering a node no longer sum to one. Thus Fig. 2.6 and the subsequent figures are unnormalized in a different way from the previous figures. Nevertheless, the denominators can still be retrieved by summing the numerators, discarding the terms containing t, and affixing the appropriate homogenizing indices to w.
FIG. 2.6. The homogeneous de Boor algorithm (unnormalized) for a homogeneous cubic curve with homogeneous knots (t_1, w_1), ..., (t_6, w_6).
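In code, the homogeneous de Boor algorithm is a small modification of the standard triangular scheme. The sketch below assumes the level-r recurrence pairs the knots (t_i, w_i) and (t_{i+n+1-r}, w_{i+n+1-r}), so that setting every w to 1 recovers the ordinary de Boor algorithm; the function name and indexing conventions are ours:

```python
def homogeneous_de_boor(points, knots, t, w):
    """Evaluate a progressive curve p(t, w) given scalar (or componentwise)
    control points P_0, ..., P_n and homogeneous knots (t_1, w_1), ..., (t_2n, w_2n)."""
    n = len(points) - 1
    assert len(knots) == 2 * n
    d = list(points)
    for r in range(1, n + 1):
        nxt = []
        for i in range(r, n + 1):
            ta, wa = knots[i - 1]           # (t_i, w_i): knot i, 1-based
            tb, wb = knots[i + n - r]       # (t_{i+n+1-r}, w_{i+n+1-r})
            den = tb * wa - ta * wb         # independently homogenized denominator
            c0 = (tb * w - t * wb) / den    # homogenized numerator weighting d[i-1]
            c1 = (t * wa - ta * w) / den    # homogenized numerator weighting d[i]
            nxt.append(c0 * d[i - r] + c1 * d[i - r + 1])
        d = nxt
    return d[0]

# With affine knots and w = 1 this is the ordinary de Boor algorithm; e.g. the
# quadratic Bezier segment with knots 0, 0, 1, 1 and control values 0, 1, 0
# is p(t) = 2t(1 - t).
knots = [(0.0, 1.0), (0.0, 1.0), (1.0, 1.0), (1.0, 1.0)]
assert abs(homogeneous_de_boor([0.0, 1.0, 0.0], knots, 0.5, 1.0) - 0.5) < 1e-12

# p(t, w) is homogeneous of degree n: scaling (t, w) by s scales p by s^n.
assert abs(homogeneous_de_boor([0.0, 1.0, 0.0], knots, 1.0, 2.0)
           - 4.0 * homogeneous_de_boor([0.0, 1.0, 0.0], knots, 0.5, 1.0)) < 1e-12
```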
For the homogeneous de Boor algorithm to work, none of the denominators on the right-hand side of the algorithm are allowed to vanish. In analogy with
the univariate case we call a sequence (t_1, w_1), ..., (t_{2n}, w_{2n}) progressive if none of the denominators in this recursive evaluation algorithm are zero. It follows that (t_1, w_1), ..., (t_{2n}, w_{2n}) is a progressive sequence if and only if

det ( t_i  w_i ; t_j  w_j ) = t_i w_j - t_j w_i ≠ 0

for every pair of knots (t_i, w_i), (t_j, w_j) that appears together in a denominator of the recurrence.
The curves p(t, w) defined by the homogeneous de Boor algorithm are again called progressive curves. Notice that the progressive curve p(t, 1) is a univariate polynomial with homogeneous knots. Thus we have actually succeeded in extending the notion of homogeneous knots to univariate polynomials. We shall be particularly interested in two kinds of homogeneous parameters: affine and vector-valued. The homogeneous parameter (t, w) is called affine if w = 1 and (t, w) is said to be vector-valued if w = 0. Similarly, a knot (t_k, w_k) is called affine if w_k = 1 and vector-valued if w_k = 0. (These terms are adopted because the values (t, 1) correspond to points in a one-dimensional affine space and the values (t, 0) correspond to vectors in the same space.) Affine parameters and affine knots are what appear in the standard version of the de Boor algorithm. In general, the homogeneous de Boor algorithm is somewhat more complicated than the standard de Boor algorithm, and little is gained by considering the most general case. However, when some of the knots are affine and the remainder are vector-valued, certain terms vanish and there can be a good deal of simplification in the algorithm. We shall return to this observation in more detail in §2.4 where we shall look at some particularly interesting examples.

2.3.2. The Multilinear Blossom. Given a univariate polynomial of degree n, we have seen that there exists a unique symmetric multiaffine polynomial in n affine variables whose diagonal reduces to the original polynomial. Similarly, given a bivariate homogeneous polynomial of degree n, there exists a unique symmetric multilinear polynomial in n homogeneous variables whose diagonal reduces to the original homogeneous polynomial.
The multilinear function corresponding to the homogeneous polynomial p(t, w) is denoted by B_p[(u_1, v_1), ..., (u_n, v_n)] and is called the multilinear blossom of p(t, w). For homogeneous polynomials expressed in monomial form, the correspondence between p(t, w) and B_p[(u_1, v_1), ..., (u_n, v_n)] is given by
where the sum in brackets is taken over all subsets of order k of the integers {1, ..., n}. Like ordinary and homogeneous polynomials, the multiaffine and multilinear blossoms are intimately related. Given the multiaffine blossom, we can
linearize it by multiplying each term u_{i_1} ... u_{i_k} by v_{i_{k+1}} ... v_{i_n}. Similarly, given the multilinear blossom, we can dehomogenize it by setting v_k = 1, k = 1, ..., n. Moreover, by examining the monomial basis it is easy to see that blossoming and homogenization commute (see Fig. 2.7).

FIG. 2.7. Blossoming and homogenization commute.
Just like symmetric multiaffine polynomials, symmetric multilinear polynomials can be computed from a recurrence. In general, we have the following identity:
If B is a symmetric multilinear polynomial, then it follows from this identity that
where
Thus, using the following algorithm, we can compute an arbitrary value of B[(u_1, v_1), ..., (u_n, v_n)] recursively once we know the n + 1 values B[(t_1, w_1), ..., (t_n, w_n)], ..., B[(t_{n+1}, w_{n+1}), ..., (t_{2n}, w_{2n})] for any progressive sequence (t_1, w_1), ..., (t_{2n}, w_{2n}).
Recursive Evaluation Algorithm for Symmetric Multilinear Polynomials. Let

Then
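A sketch of this recursive evaluation algorithm in code (assuming the level-r recurrence pairs the knots (t_i, w_i) and (t_{i+n+1-r}, w_{i+n+1-r}), with the single argument (u_r, v_r) substituted at level r; the function name is ours):

```python
def blossom(values, knots, args):
    """Evaluate the symmetric multilinear blossom B[(u_1, v_1), ..., (u_n, v_n)]
    from its values at consecutive homogeneous knots:
    values[i] = B[(t_{i+1}, w_{i+1}), ..., (t_{i+n}, w_{i+n})], i = 0, ..., n."""
    n = len(values) - 1
    d = list(values)
    for r, (u, v) in enumerate(args, start=1):   # substitute (u_r, v_r) at level r
        nxt = []
        for i in range(r, n + 1):
            ta, wa = knots[i - 1]                # (t_i, w_i)
            tb, wb = knots[i + n - r]            # (t_{i+n+1-r}, w_{i+n+1-r})
            den = tb * wa - ta * wb
            nxt.append(((tb * v - u * wb) * d[i - r]
                        + (u * wa - ta * v) * d[i - r + 1]) / den)
        d = nxt
    return d[0]

# The quadratic p(t) = 2t(1 - t) has multiaffine blossom B(u1, u2) = u1 + u2 - 2 u1 u2;
# its values at consecutive Bezier knots 0, 0, 1, 1 are the control values 0, 1, 0.
knots = [(0.0, 1.0), (0.0, 1.0), (1.0, 1.0), (1.0, 1.0)]
b = blossom([0.0, 1.0, 0.0], knots, [(0.25, 1.0), (0.75, 1.0)])
assert abs(b - 0.625) < 1e-12    # B(0.25, 0.75) = 0.25 + 0.75 - 2 * 0.1875
```

Substituting the same argument (t, w) at every level reduces this routine to the homogeneous de Boor algorithm, which is exactly the diagonal property of the blossom.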
We illustrate this algorithm graphically for symmetric multilinear functions of three variables in Fig. 2.8. Notice that this algorithm can be obtained from the recursive evaluation algorithm for symmetric multiaffine polynomials by independently homogenizing the numerator and denominator (compare Fig. 2.8 to Fig. 2.3). Moreover, if we set (u_r, v_r) = (t, w), r = 1, ..., n, then this algorithm reduces to the homogeneous de Boor algorithm for the homogeneous polynomial p(t, w) = B[(t, w), ..., (t, w)] relative to the homogeneous knots (t_1, w_1), ..., (t_{2n}, w_{2n}). If B[(u_1, v_1), ..., (u_n, v_n)] is a symmetric multilinear polynomial in the variables (u_1, v_1), ..., (u_n, v_n), then B[(u_1, 1), ..., (u_n, 1)] is a symmetric multiaffine polynomial in the variables u_1, ..., u_n. Indeed, by dehomogenization, the multilinear and multiaffine blossoms are related by
Thus if we know the multilinear blossom, we know the multiaffine blossom. Conversely, if we know the multiaffine blossom, then we can use the recursive evaluation algorithm to compute the multilinear blossom. Thus it follows from the recursive evaluation algorithm that each multiaffine blossom has a unique multilinear extension. Therefore if we know one blossom, we know the other. The main tool for studying ordinary progressive curves is the blossoming form of the dual functionals. An analogous result holds for homogeneous progressive curves. Let (t_1, w_1), ..., (t_{2n}, w_{2n}) be a progressive sequence, and
FIG. 2.8. Recursive evaluation algorithm (unnormalized) for symmetric multilinear polynomials in three variables. To save space, a bar is used to denote homogeneous parameters; thus, for example, ū_j = (u_j, v_j).
let p(t, w) be a progressive curve with control points P_0, ..., P_n relative to these homogeneous knots. Then by much the same arguments we used in §2.2 for the multiaffine case, we can show that

That is, the control points are given by the multilinear blossom evaluated at consecutive knots. (For further details, see Ramshaw [21].) Thus once again we have a blossoming form for the dual functionals, this time for the progressive basis corresponding to the homogeneous knots (t_1, w_1), ..., (t_{2n}, w_{2n}). Now all the results derived in §2.2 for ordinary progressive curves extend quite readily to homogeneous progressive curves. In particular, Ramshaw's blossoming algorithm, Boehm's knot insertion algorithm, the Oslo algorithm, Sablonnière's change of basis algorithm, Marsden's identity, and the de Boor-Fix form of the dual functionals all remain valid. The proofs are much the same. Simply replace affine parameters by homogeneous parameters, affine knots by homogeneous knots, and the multiaffine blossom by the multilinear blossom. Notice that in the homogeneous version of Marsden's identity and the de Boor-Fix dual functionals, the dual basis functions ψ_0(t), ..., ψ_n(t) must be homogenized with respect to both the parameter t and the knots t_k. Thus the Polya basis

must be replaced by the homogeneous Polya basis
With this substitution Marsden's identity becomes

where b_0^n(x, w_x), ..., b_n^n(x, w_x) are the homogeneous progressive basis functions. Similarly, for the de Boor-Fix dual functionals, we have
where the differentiation on the right-hand side is with respect to t.

2.3.3. Differentiation by Knot Insertion. The multilinear blossom can be used to compute the derivatives of a progressive curve by evaluating the blossom at vector-valued knots. This observation, originally due to Ramshaw [21], will allow us to perform differentiation by applying knot insertion procedures. Before we develop this idea further, we introduce some notation. When applying the multilinear blossom, we shall need to use simple identities of the form
As is standard practice, we shall often abuse notation and simply write t to represent the affine parameter (t, 1). (We shall also use t to denote the scalar t as in the left-hand side of (2.13); the meaning we assign to t should be clear from the context.) However, we need some notation for the vector (1, 0). We shall adopt the notation δ = (1, 0) for the unit vector. Thus we have the identity (t + h, 1) = (t, 1) + h(1, 0) = t + hδ.
The following results are given in Ramshaw [21]; we include them here for completeness.

PROPOSITION 2.3.1. Let B[(t_1, w_1), ..., (t_n, w_n)] be a symmetric multilinear function. Then

where δ appears as an argument k times in the coefficient of h^k.

Proof. By induction on n. Clearly this result is true for n = 1. Now suppose it is true for n - 1. Let
Then C is a symmetric multilinear function in n — 1 variables. Therefore by the inductive hypothesis
COROLLARY 2.3.1. Let p(t) be a degree n polynomial curve with multilinear blossom B_p. Then
where δ appears as an argument k times on the right-hand side of this equation. Proof. By Taylor's theorem
But by the preceding proposition we also have
Equating the coefficients of these two polynomials in h, we obtain
Now the result follows by multiplying both sides of this equation by k!. □ Corollary 2.3.1 has an interesting interpretation in terms of the diagram for the homogeneous de Boor algorithm (Fig. 2.6). Suppose we want to compute the kth derivative of a progressive curve p(t). By Corollary 2.3.1 we need to compute B_p(δ, ..., δ, t, ..., t). But by Ramshaw's blossoming algorithm, we can compute B_p(δ, ..., δ, t, ..., t) by substituting δ = (1, 0) for (t, w) in the first k levels of the de Boor algorithm. If we perform this substitution, then
Thus we can find p^(k)(t) by differentiating the labels on the first k levels of the homogeneous de Boor algorithm, leaving the remainder of the labels intact,
and multiplying the result by the normalizing constant n!/(n - k)!. Since the multilinear blossom B_p(δ, ..., δ, t, ..., t) is symmetric, we could differentiate the labels on any arbitrary k levels of the diagram and this algorithm would still be valid. We illustrate this procedure for finding the first derivative of a progressive cubic curve in Fig. 2.9.
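Corollary 2.3.1 can be checked directly on monomials. The sketch below assumes the monomial blossoming formula of §2.3.2 with the usual binomial normalization, so that the blossom of t^k w^(n-k) averages the products u_S v_{S^c} over all order-k subsets S; the function name is ours:

```python
from itertools import combinations
from math import comb

def monomial_blossom(k, n, args):
    """Multilinear blossom of p(t, w) = t^k w^(n-k): the average, over all
    order-k subsets S of the n argument slots, of
    (product of u's in S) * (product of v's outside S)."""
    total = 0.0
    for S in combinations(range(n), k):
        term = 1.0
        for j, (u, v) in enumerate(args):
            term *= u if j in S else v
        total += term
    return total / comb(n, k)

# Diagonal property: B[(t, w), ..., (t, w)] = t^k w^(n-k).
assert abs(monomial_blossom(2, 3, [(2.0, 3.0)] * 3) - 2.0**2 * 3.0) < 1e-12

# Corollary 2.3.1 for p(t) = t^3: p'(t) = (3!/2!) B(delta, t, t) = 3 t^2.
delta, tt = (1.0, 0.0), (0.7, 1.0)
assert abs(3 * monomial_blossom(3, 3, [delta, tt, tt]) - 3 * 0.7**2) < 1e-12
```

Because the blossom is linear in the coefficients of p, verifying the derivative formula on each monomial verifies it for every polynomial.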
Original Control Points
FIG. 2.9. Recursive algorithm (unnormalized) for computing the first derivative of a progressive cubic curve with knots (t_1, w_1), ..., (t_6, w_6).
From Corollary 2.3.1 and Fig. 2.9 we can deduce the following generalization to homogeneous progressive curves of the standard two-term differentiation formula for B-spline curves [11]. COROLLARY 2.3.2. Let p(t) = Σ_k P_k b_k^n(t) be a progressive curve with knots (t_1, w_1), ..., (t_{2n}, w_{2n}), and let b_0^(n-1)(t), ..., b_(n-1)^(n-1)(t) be the progressive basis for the knots (t_2, w_2), ..., (t_{2n-1}, w_{2n-1}). Then
Proof. From Corollary 2.3.1
Now by the recursive evaluation algorithm for symmetric multilinear polynomials
(see Fig. 2.9). But the recurrence for the last n - 1 levels of B_p(δ, t, ..., t) is exactly the same as the homogeneous de Boor algorithm for the progressive
curve with these control points relative to the knots (t_2, w_2), ..., (t_{2n-1}, w_{2n-1}). Therefore

More generally, if p(t) = Σ_j P_j b_j^n(t) is a progressive curve and b_0^(n-k)(t), ..., b_(n-k)^(n-k)(t) is the progressive basis for the knots (t_{k+1}, w_{k+1}), ..., (t_{2n-k}, w_{2n-k}), then we can apply Corollary 2.3.1 to find the coefficients of p^(k)(t) with respect to the basis b_0^(n-k)(t), ..., b_(n-k)^(n-k)(t). For by Corollary 2.3.1,
Moreover, the last n - k levels of the recursive evaluation algorithm for B_p(δ, ..., δ, t, ..., t) are exactly the same as the homogeneous de Boor algorithm for a progressive curve with knots (t_{k+1}, w_{k+1}), ..., (t_{2n-k}, w_{2n-k}). Thus to find the coefficients of p^(k)(t) with respect to the progressive basis b_0^(n-k)(t), ..., b_(n-k)^(n-k)(t), we need only compute the values on the kth level of the recurrence for B_p(δ, ..., δ, t, ..., t). But this procedure is precisely Boehm's algorithm for inserting a k-fold knot at δ. Thus, up to multiplication by the constant factors n!/(n - k)!, computing the kth derivative with respect to b_0^(n-k)(t), ..., b_(n-k)^(n-k)(t) is equivalent to inserting a k-fold knot at δ. Now suppose we wish to find all the derivatives of a progressive curve p(t) at t = a simultaneously. Then by Corollary 2.3.1 we need to compute
But these blossom values are just the coefficients of p(t) relative to the knot sequence a, ..., a, δ, ..., δ, where a and δ each appear n times. Thus we can apply Boehm's knot insertion algorithm, or the Oslo algorithm, or even Sablonnière's change of basis algorithm to find the derivatives of p(t) at t = a by converting from control points relative to the progressive sequence (t_1, w_1), ..., (t_{2n}, w_{2n}) to control points relative to the progressive sequence a, ..., a, δ, ..., δ. The quickest, most efficient, way to perform this computation is first to insert an n-fold knot at δ and then to insert an n-fold knot at a. In this way, the first step is completely independent of a. If we save the results of the first step, which actually calculates the coefficients of p(t) relative to the knot sequence (t_1, w_1), ..., (t_n, w_n), δ, ..., δ, then we can reuse these coefficients to compute the derivatives of p(t) at other parameter values. Notice too that the kth level of n-fold knot insertion at δ also gives us the coefficients of p^(k)(t) with respect to the progressive basis for the knot sequence (t_{k+1}, w_{k+1}), ..., (t_{2n-k}, w_{2n-k}). We illustrate this algorithm for finding all the derivatives of a progressive cubic curve in Fig. 2.10. Note that to actually find the derivatives, we must
multiply the kth coefficient by n!/(n - k)!. This derivative algorithm, but not its interpretation as knot insertion, is originally due to Boehm [9], who developed it for B-spline curves. Here we have extended it to arbitrary progressive schemes.
Original Control Points
FIG. 2.10(a). The first step in Boehm's derivative algorithm (unnormalized) for cubic progressive curves: inserting a triple knot at δ. Notice that the kth level of n-fold knot insertion at δ provides the coefficients of the kth derivative of the curve with respect to the progressive basis for the homogeneous knot sequence (t_{k+1}, w_{k+1}), ..., (t_{2n-k}, w_{2n-k}). As in Fig. 2.8, a bar is used to denote homogeneous knots.
2.3.4. Matching Derivatives at the Knots. The derivatives of adjacent segments of B-spline curves agree at the joints. For simplicity, let us restrict ourselves to knot sequences where each knot appears with multiplicity 1. Then if p(t) is the B-spline curve segment for the knots t_1, ..., t_{2n} and control points P_0, ..., P_n, and q(t) is the B-spline curve segment for the knots t_2, ..., t_{2n+1} and control points P_1, ..., P_{n+1}, then p(t) and q(t) meet with n - 1 matching derivatives at t = t_{n+1}. (If t_{n+1} has multiplicity m and we take q(t) to be the curve segment with knots t_{m+1}, ..., t_{2n+m} and control points P_m, ..., P_{n+m}, then p(t) and q(t) meet with n - m matching derivatives at t = t_{n+1}.) Using the results from §2.3.3, we shall now show that this result remains valid for arbitrary progressive schemes. Let us consider then two progressive curves p(t) and q(t) where p(t) is the progressive curve for the knots (t_1, w_1), ..., (t_{2n}, w_{2n}) and control points P_0, ..., P_n, and q(t) is the progressive curve for the knots (t_2, w_2), ..., (t_{2n+1}, w_{2n+1}) and control points P_1, ..., P_{n+1}. Again let us assume for simplicity that each knot has multiplicity 1. Then by Corollary 2.3.1
FIG. 2.10(b). The second step of Boehm's derivative algorithm (unnormalized) for cubic progressive curves: inserting a triple knot at a. The values at the base of the triangle are taken from the left edge of the triangle computed in the first stage of the algorithm. The values which emerge along the right edge of the triangle computed in this second stage are, up to constant multiples, the derivatives of the original progressive curve at t = a. Again, as in Fig. 2.10(a), a bar is used to denote homogeneous knots.
for k = 0, ..., n - 1,

since differentiation and homogenization commute. (We shall always take derivatives with respect to t; symmetric results are valid for derivatives with respect to w.) Moreover, because of the overlap of knots and control points for the curves p(t) and q(t), the diagrams of the homogeneous de Boor algorithms for p(t, w) and q(t, w) overlap (see Fig. 2.11). Hence it follows from Ramshaw's blossoming algorithm that the diagrams of the recursive evaluation algorithms for the blossoms of p(t) and q(t) also overlap. Therefore, in particular,

Letting
we obtain
FIG. 2.11. The homogeneous de Boor algorithms (unnormalized) for the homogeneous curves p(t, w) and q(t, w) overlap.
hence by Corollary 2.3.1
If w_{n+1} ≠ 0, this equality has a simple geometric interpretation. Since p^(k)(t, w) and q^(k)(t, w) are homogeneous polynomials of degree n - k,
Thus equality of derivatives at homogeneous values implies matching of derivatives at corresponding affine values. Similar results hold when (t_{n+1}, w_{n+1}) has multiplicity m > 1. In this case if we take q(t) to be the curve segment with knots (t_{m+1}, w_{m+1}), ..., (t_{2n+m}, w_{2n+m}) and control points P_m, ..., P_{n+m}, then p(t) and q(t) meet with n - m matching derivatives at t_{n+1}/w_{n+1}. The proof is much the same. Now let us consider what happens if (t_{n+1}, w_{n+1}) = (1, 0) = δ. By Corollary 2.3.1
Thus, up to constant multiples, for any single polynomial all the derivatives evaluated at δ are identical to the highest order derivative. Therefore, for any k,

Equality of all the derivatives at any affine value would imply that p(t) and q(t) are the same polynomial. But δ is vector-valued rather than affine, so we cannot conclude that p(t) and q(t) are identical. All we know so far is that the highest order derivatives match. Geometrically we can picture the situation in two ways. Because δ is a vector, the values p(δ) and q(δ) are also vectors. Thus even though all the
Often we are given a knot sequence (t_1, w_1),..., (t_{n+d}, w_{n+d}) and a collection of control points P_0,..., P_d, and we want to construct a sequence of degree n polynomial curves that join together smoothly at the knots. If the knot sequence is affine and nondecreasing, we can form a B-spline curve by stringing together the progressive curves p_k(t), k = 0,..., d - n, defined by the knots t_{k+1},..., t_{k+2n} and the control points P_k,..., P_{k+n}. Smoothness is maintained by restricting the domain of each curve p_k(t) to the interval [t_{k+n}, t_{k+n+1}). Since the knots are nondecreasing, there is no problem with overlapping domains. But if the knot sequence is not nondecreasing, such a construction may not be possible because the domains of various segments might overlap. Thus for some values of the parameter the curve might not be uniquely defined. Moreover, if some knots are vector-valued, then two adjacent curves might not even join together continuously. Thus the restriction to affine, nondecreasing knots guarantees that we can form smooth piecewise polynomial curves with many segments. Notice, however, that this restriction may be relaxed for the first and last n knots, since these knots are not needed to define any of the joints.

2.3.5. Integration. The differentiation formula for progressive curves derived in Corollary 2.3.2 of §2.3.3 can be applied to integrate the progressive basis functions. The antiderivative of a progressive basis function is just a constant multiple of another progressive basis function of one higher degree. As we shall see, this result is a simple consequence of Corollary 2.3.2.

Consider the progressive basis b_0^{n-1}(t),..., b_{n-1}^{n-1}(t). The entire basis depends on the knot sequence (t_2, w_2),..., (t_{2n-1}, w_{2n-1}), but the basis element b_k^{n-1}(t) depends, at most, only on the knots (t_{k+1}, w_{k+1}),..., (t_{k+n+1}, w_{k+n+1}). (For k = 0 the relevant knots are (t_2, w_2),..., (t_{n+1}, w_{n+1}), and for k = n - 1 the relevant knots are (t_n, w_n),..., (t_{2n-1}, w_{2n-1}). The analysis for these two special cases differs slightly from the discussion of the general case given below.) Therefore if we fix only the basis function b_k^{n-1}(t), the remainder of the knots are not specified. Thus we can set them any way we please as long as we are careful to form a progressive sequence of degree n. (It may be that the original sequence is progressive for degree n - 1 but cannot be extended to a progressive sequence for degree n; in this case an integration technique other than the one discussed below would have to be used.) We shall take advantage of this observation shortly, but first we need to consider how to differentiate the functions in an arbitrary progressive basis.

The B-spline basis functions satisfy a two-term differentiation formula [11]. This result extends easily to arbitrary progressive bases. Indeed, by setting P_j = δ_{jk} in Corollary 2.3.2 of §2.3.3 we obtain the following general two-term differentiation formula
The two-term differentiation formula for the B-spline basis functions is just the special case of this result where w_j = 1 for all j and the knots are nondecreasing. Now recall that we are free to choose the knots (t_k, w_k) and (t_{k+n+2}, w_{k+n+2}) any way we please provided that we are careful to form another progressive sequence. But since the original sequence is progressive,
Therefore we can always choose at least one of the two parameters w_k, w_{k+n+2} to be zero and still obtain a progressive sequence. If we let either (t_k, w_k) = (1, 0) = δ or (t_{k+n+2}, w_{k+n+2}) = (1, 0) = δ, then we obtain the respective one-term differentiation formulas
Therefore we conclude that, up to an additive constant,
provided w_{n+1},..., w_{k+n+1} ≠ 0, and
provided w_{k+1},..., w_n ≠ 0. Since we are assuming that the sequence (t_{k+1}, w_{k+1}),..., (t_{k+n+1}, w_{k+n+1}) can always be extended to a progressive sequence of length 2n, either w_{k+1} ≠ 0 or w_{k+n+1} ≠ 0. Therefore at least one of these formulas is always valid. Potentially then we have two ways of integrating b_k^{n-1}(t): we can compute either b_k^n(t) or b_{k+1}^n(t). These basis functions are defined recursively by the homogenized version of (2.2). If we let either (t_k, w_k) = δ or (t_{k+n+2}, w_{k+n+2}) = δ in (2.2), then we obtain the respective recursion formulas
Again, since we are assuming that the sequence is part of a progressive sequence, either w_{k+1} ≠ 0 or w_{k+n+1} ≠ 0, so at least one of these formulas is always valid.
Locally B-splines are progressive basis functions. Therefore we can apply our integration formulas for progressive bases to B-splines by applying them segment by segment. For B-splines w_j = 1, j = k+1,..., k+n+1, and these integral formulas are essentially equivalent to those given in [14]. If we adopt the perspective of projective geometry where δ represents a point at infinity, then we can think of the integrals of B-splines as higher-order B-splines with all the same knots and with one additional knot at infinity. Intuitively this result makes sense because integration is a smoothing operation. Thus the integral of a B-spline must be a spline of one higher degree with one more order of smoothness at each knot. Consequently, the integral of a B-spline of degree n should be the B-spline of degree n+1 with the same knots. However, the B-spline of degree n+1 requires one additional knot. We overcome this difficulty by placing the extra knot at δ. This new knot forces the B-spline of degree n+1 to be a positive constant, rather than zero, for values to the right of the support of the corresponding B-spline of degree n. Thus with this additional knot at δ, the B-spline of degree n+1, when evaluated to the right of the support of the corresponding B-spline of degree n and properly normalized, gives the value of the definite integral of the B-spline of degree n over any interval containing its support—that is, it gives the area under the corresponding degree n B-spline curve.

In the case where all the knots are affine—that is, w_j = 1, j = k+1,..., k+n+1—these results have an interesting interpretation in terms of the triangular diagram for the Cox-de Boor-Mansfield recurrence (Fig. 2.2). When w_k = w_{k+n+2} = 0, the arrow entering b_k^n(t) from the left and the arrow entering b_{k+1}^n(t) from the right both have the label 1.
Therefore, by the in-out property of the diagram, the arrow entering b_{k+m-n}^m(t) from the left and the arrow entering b_{k+1}^m(t) from the right have the label 1 for all m. Essentially then these terms are just added into the next lower level until they arrive at either b_k^n(t) from the left or b_{k+1}^n(t) from the right. Instead of running the recurrence with all these ones, we can simply add these terms together. This observation leads to the following integration algorithm. To integrate the basis function b_k^{n-1}(t), simply extend the diagram for the recurrence down one level from the node containing b_k^{n-1}(t). Mark the new edges with labels satisfying the in-out property. Now prune the diagram so that the leaf nodes at level m have the indices k+m-n and k+1. Let α_{k+m-n}^m(t) and γ_{k+1}^m(t) denote these leaf nodes. Then, up to constant multiples, the sums Σ_{m=n-k}^{n} α_{k+m-n}^m(t) and Σ_{m=k+1}^{n} γ_{k+1}^m(t) are integrals of b_k^{n-1}(t). We illustrate this algorithm for integrating a cubic progressive basis function with affine knots in Fig. 2.12. Notice in particular that this algorithm can be used to integrate both the B-spline and Bernstein basis functions. Moreover, for the Bernstein basis functions, all the labels in the diagram are either 1 - t or t (see Fig. 2.13 in §2.4.2).

The factor (t_{k+n+1} w_{k+1} - t_{k+1} w_{k+n+1}) appears both in the integration formulas and in the recurrences. We can simplify these formulas somewhat by
FIG. 2.12. Integration algorithm (unnormalized) for the progressive basis function.
multiplying through by this factor. Let
Then
and, up to additive constants,
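For affine knots with all w_j = 1, these integration formulas specialize to the classical B-spline results of [14]; in particular, the normalized B-spline of degree n integrates to (t_{k+n+1} - t_k)/(n + 1) over its support. The following sketch (a hypothetical Python illustration, not code from the text; both function names are invented) checks this special case numerically using the Cox-de Boor-Mansfield recurrence and a midpoint rule:

```python
def bspline(k, n, knots, t):
    """Normalized B-spline N_k^n(t) by the Cox-de Boor recurrence;
    terms with a zero knot difference are taken to be zero."""
    if n == 0:
        return 1.0 if knots[k] <= t < knots[k + 1] else 0.0
    total = 0.0
    d1 = knots[k + n] - knots[k]
    if d1 != 0.0:
        total += (t - knots[k]) / d1 * bspline(k, n - 1, knots, t)
    d2 = knots[k + n + 1] - knots[k + 1]
    if d2 != 0.0:
        total += (knots[k + n + 1] - t) / d2 * bspline(k + 1, n - 1, knots, t)
    return total

def integrate_bspline(k, n, knots, steps=2000):
    """Midpoint-rule integral of N_k^n over its support [t_k, t_{k+n+1}];
    the exact value should be (t_{k+n+1} - t_k)/(n + 1)."""
    a, b = knots[k], knots[k + n + 1]
    h = (b - a) / steps
    return h * sum(bspline(k, n, knots, a + (i + 0.5) * h) for i in range(steps))
```

For the uniform knots 0, 1, 2, 3 the quadratic B-spline N_0^2 should integrate to (3 - 0)/3 = 1, and the same value 1 results for any degree-n B-spline whose support has length n + 1.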
2.4. Examples of Progressive Curves
Progressive schemes are defined by the de Boor algorithm in terms of a knot sequence (t_1, w_1),..., (t_{2n}, w_{2n}) subject only to the constraints t_i w_{j+n} - t_{j+n} w_i ≠ 0 whenever 1 ≤ j ≤ i ≤ n. Many interesting curve forms are actually progressive schemes. Here we shall examine four well-known bases and one lesser known basis: the (local) B-spline basis, the Bernstein basis, the power basis, the monomial basis, and the Newton dual basis. We shall show that each of these curve schemes is progressive and then apply the algorithms for progressive curves developed in the preceding sections to generate transformations between these various forms. Transformations
to the monomial and Newton dual forms are particularly interesting since these change of basis techniques are related to the differentiation algorithms developed in §2.3.3.

2.4.1. B-splines. B-spline segments are the canonical progressive form. Indeed, progressive curves were first introduced as an extension of B-spline curve segments by generalizing the de Boor algorithm (Fig. 2.1) to progressive sequences. For B-spline segments the knots t_1,..., t_{2n} must be affine and satisfy the constraints t_i ≤ t_j whenever i < j and t_{i+n} ≠ t_i. B-spline segments have the additional property that they automatically join together continuously to form smooth piecewise polynomial curves. B-spline curves are extremely popular and have been studied extensively [11], [13], [17], [25]. One of our main motives for investigating progressive curves is to find techniques for transforming B-spline curves to and from other piecewise polynomial forms.

2.4.2. Bernstein Polynomials. The Bernstein basis is a special case of the B-spline basis where the knots are a,..., a, b,..., b, with b ≠ a. Explicitly the Bernstein basis functions B_0^n(t),..., B_n^n(t) are given by

B_k^n(t) = C(n,k) (t - a)^k (b - t)^{n-k} / (b - a)^n,  k = 0,..., n.
These basis functions satisfy the recurrence relationship

B_k^n(t) = [(b - t) B_k^{n-1}(t) + (t - a) B_{k-1}^{n-1}(t)] / (b - a).
By the de Boor-Fix formula, the Bernstein basis is self dual. That is, the corresponding Polya basis functions
Thus, up to constant multiples, the Polya basis corresponding to the Bernstein basis is the Bernstein basis with the order reversed. The Bernstein basis functions generate Bezier curves over the interval [a, b]. Bezier curves are fundamental in the study of computer aided geometric design. They have been investigated extensively [9], [17] and have many useful geometric properties [17]. For the knot sequence a,..., a, b,..., b, the de Boor algorithm for B-spline curves specializes to the de Casteljau algorithm for Bezier curves. We illustrate the de Casteljau algorithm for cubic Bezier curves with the knot sequence 0, 0, 0, 1, 1, 1 in Fig. 2.13.

Fix a polynomial curve p(t). It follows from (2.8) that the dual functionals of the Bernstein basis are given by
where a appears as an argument n - i times and b appears i times. We can use this formula to compute the Bezier control points of any polynomial
FIG. 2.13. The de Casteljau algorithm for cubic Bezier curves.
curve by finding its blossom and evaluating at the knots. One of the initial applications of knot insertion schemes and Sablonniere's tetrahedral algorithm was to transform B-spline curves into piecewise Bezier form by forcing each knot to have multiplicity n. The inverse operation—transforming smooth piecewise Bezier curves into B-spline form—can also be accomplished using these techniques.

2.4.3. The Power Basis. Consider the affine knot sequence t_1,..., t_n, t_0,..., t_{n-1} subject to the constraints t_i ≠ t_j whenever i ≠ j. This sequence is clearly progressive, although it does not satisfy the usual B-spline constraints. We shall show that this progressive sequence generates curves whose blending functions are given by the (normalized) power basis R_0^n(t),..., R_n^n(t) at the nodes t_0,..., t_n; that is, by the functions
We can obtain this formula from several different, but illuminating, points of view; therefore we shall provide several distinct derivations of this result. The first proof is by induction on n using the recursive evaluation algorithm for progressive curves. For n = 1 the result is obvious. Now assume the result is true for n - 1. To find the kth blending function, consider the de Boor algorithm with the coefficients 0,..., 0, 1, 0,..., 0—that is, with a 1 in the kth position and a 0 everywhere else. For the knot sequence t_1,..., t_n, t_0,..., t_{n-1}, the last level of the recursive evaluation algorithm is
In §2.2.1 we showed that, in general, a curve that is progressive with respect to one collection of control points and knots is also progressive with respect to a related, shifted collection of control points and knots.
Let R_k^n[t_0,..., t_n](t) denote the dependence of R_k^n(t) on the nodes t_0,..., t_n. Then by the inductive hypothesis applied to the knot sequences
Therefore for k = 1,..., n - 1
For k = 0, n the same result follows easily by a similar calculation. Thus the progressive basis corresponding to the knot sequence t_1,..., t_n, t_0,..., t_{n-1} is indeed the power basis R_0^n(t),..., R_n^n(t) at the nodes t_0,..., t_n. We illustrate the de Boor algorithm for cubic curves in power basis form in Fig. 2.14.

Another way to derive the explicit formula for R_k^n(t) is to evaluate the derivatives of R_k^n(t) at t = t_k. Let B_k^n(u_1,..., u_n) be the blossom of R_k^n(t). By Corollary 2.3.1 of §2.3.3,
But we can compute B_k^n(δ,..., δ, t_k,..., t_k) recursively by differentiating any j levels of the recursive evaluation algorithm for R_k^n(t) and substituting t = t_k on the remaining n - j levels. Let us always choose to differentiate the top j levels of the triangle. Since the two labels on the edges exiting from P_k at the base of the triangle are ±(t - t_k), we get
FIG. 2.14. The de Boor algorithm (unnormalized) for cubic curves in power basis form.

so it follows immediately that
Because R_k^n(t) is a polynomial of degree n, we must have
for some constants c_k^n. Now we can use blossoming to compute the constants c_k^n. These constants are chosen so that the progressive basis functions R_0^n(t),..., R_n^n(t) sum to one; thus
Blossoming both sides of this equation, we obtain
Now let
Then all of the terms in the summation except one vanish, and we are left with
Thus we conclude that
Another way to proceed is to use the de Boor-Fix formula. The dual basis to the power basis is
Thus, up to constant multiples, the dual basis to the power basis is the Lagrange basis. We could derive the fact that the progressive basis corresponding to the knot sequence t_1,..., t_n, t_0,..., t_{n-1} is the power basis R_0^n(t),..., R_n^n(t) directly from the de Boor-Fix formula [2]. Indeed, this result was first discovered by examining the de Boor-Fix formula, noticing that the knot sequence t_1,..., t_n, t_0,..., t_{n-1} generates the Lagrange basis as the dual basis, and then using the de Boor-Fix dual functionals to find the corresponding progressive basis. A still easier derivation is simply to substitute t = t_k into Marsden's identity when the Lagrange basis is the dual basis.

Let p(t) be a polynomial curve. From (2.8) the dual functionals of the power basis in blossoming form are given by
We can use this formula to compute the power basis coefficients of any polynomial by evaluating its blossom at the nodes. For example, if p(t) = t^k, then
Therefore
Moreover, we can use either Boehm's knot insertion algorithm, or the Oslo algorithm, or Sablonniere's algorithm to convert from B-spline to power form. We can also use these algorithms to convert between the Bezier and power forms or from the power basis at one set of nodes to the power basis at a different set of nodes.

2.4.4. The Monomial Basis. Consider again the explicit and recursion formulas for the Bernstein basis functions
If we delete the factors (b - t) and (b - a), then we get explicit and recursion formulas for the monomial basis at a:
Similarly, removing the factors (b - t) and (b - a) from the de Casteljau algorithm for Bezier curves yields a recursive evaluation algorithm for the monomial curves
But notice that this recursive evaluation algorithm is simply the homogeneous de Boor algorithm with knots (a,..., a, δ,..., δ) evaluated at (t, 1). Indeed, replacing the affine knots b,..., b with the vector-valued knots δ,..., δ has the effect of replacing the factors (b - t) and (b - a) by (1 - 0) = 1. Therefore the monomial basis is the progressive basis for the progressive sequence (a,..., a, δ,..., δ). We illustrate the recursive evaluation algorithm for cubic monomial curves with a = 0 in Fig. 2.15.
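The deleted-factor recursion is short enough to sketch in code (a hypothetical illustration, not an implementation from the text; both function names are invented). Each level replaces P_i by P_i + (t - a) P_{i+1}, and after n levels the single surviving value is p(t) = Σ_k P_k C(n,k) (t - a)^k:

```python
from math import comb

def eval_monomial_progressive(coeffs, a, t):
    """Evaluate p(t) = sum_k P_k * C(n,k) * (t - a)**k by the
    deleted-factor recursion: each level replaces P_i with
    P_i + (t - a) * P_{i+1}, exactly the de Casteljau algorithm
    with the factors (b - t) and (b - a) removed."""
    p = list(coeffs)
    while len(p) > 1:
        p = [p[i] + (t - a) * p[i + 1] for i in range(len(p) - 1)]
    return p[0]

def eval_monomial_direct(coeffs, a, t):
    """Direct evaluation of the same monomial form, for comparison."""
    n = len(coeffs) - 1
    return sum(P * comb(n, k) * (t - a) ** k for k, P in enumerate(coeffs))
```

Each pass of the while loop performs one level of the triangular diagram, so the whole evaluation costs n(n+1)/2 multiply-adds, just like de Casteljau with the normalizing factors removed.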
FIG. 2.15. The homogeneous de Boor algorithm for cubic curves in monomial form.
In the homogeneous version of Marsden's identity and the de Boor-Fix formula, we have from (2.12) that the dual basis is
Thus affine knots a are interpreted, as usual, to yield factors of (a - t), but vector-valued knots δ are interpreted, as in the de Boor algorithm, as factors of (1 - 0) = 1. Hence, just like the Bernstein basis, the monomial basis is self dual. That is, the corresponding Polya basis functions
Thus, up to constant multiples, the Polya basis corresponding to the monomial basis is the monomial basis with the order reversed. The blossoming form of the dual functionals for the monomial basis is
where δ appears as an argument i times and a appears n - i times. But by Corollary 2.3.1 of §2.3.3, these coefficients are, up to constant multiples, the
derivatives of p(t). This observation is, of course, no great surprise because by Taylor's theorem
Thus finding the monomial coefficients and finding the derivatives of p(t) are essentially the same problem. Since the monomial basis is a progressive basis, this problem can be solved by using knot insertion algorithms.

Because the monomial basis is a progressive basis, we can use Boehm's knot insertion algorithm, or the Oslo algorithm, or Sablonniere's change of basis algorithm to transform curves from monomial form to any other progressive form. In particular, to convert from monomial form to Bezier form, we need only perform multiple knot insertion so that the last n knots have the value b. But recall from §2.2.3 that n-fold knot insertion is equivalent to evaluation. Thus this change of basis algorithm is equivalent to running the de Boor evaluation algorithm for the monomial basis and then reading the Bezier coefficients off the left edge of the diagram. Moreover, on the right edge of the diagram lie the coefficients for the monomial basis at b. In the classical literature this algorithm for finding derivatives of the monomial form is known as synthetic division [25]. Thus, for the monomial basis, evaluation, conversion to Bezier form, and differentiation are all essentially one and the same procedure. We illustrate the knot insertion algorithm for converting cubic curves from monomial to Bezier form with a = 0 and b = 1 in Fig. 2.16. Notice that this diagram is just Pascal's triangle in reverse with the label 1 along every edge. Thus using knot insertion, we have derived the well-known formula [17]
Generally, when working with the monomial form, we have the coefficients S_k of t^k rather than the coefficients P_k of C(n,k) t^k. Since P_k = S_k / C(n,k), k = 0,..., n, there is little difficulty in applying these transformation algorithms to curves given in the more standard monomial form.

If we want to go in the other direction and convert from Bezier to monomial form, we must perform multiple knot insertion at δ. To insert knots at δ, we must first homogenize the de Casteljau algorithm by replacing (b - t) by (bw - t) and (t - a) by (t - aw) and then substitute δ = (1, 0) for (t, w). The net effect of these operations is that
Notice that the monomial coefficients at t = a emerge along the left edge and that the monomial coefficients at t = b emerge along the right edge of the diagram. Thus this algorithm simultaneously computes, up to constant multiples, the derivatives of a Bezier curve at its left and right end points.
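This triangle of ones is easy to sketch in code (a hypothetical illustration with an invented function name, assuming a = 0 and b = 1 as in Fig. 2.16). Each level adds adjacent entries; the left edge collects the Bezier points, and the right edge, read from the apex down, collects the coefficients relative to the basis C(n,k)(t - 1)^k:

```python
from math import comb

def monomial_to_bezier(P):
    """Pascal's triangle in reverse: every level sums adjacent
    entries.  Returns (Q, R): Q = Bezier control points (left edge)
    and R = coefficients relative to C(n,k)*(t-1)**k (right edge,
    apex first), for input coefficients P relative to C(n,k)*t**k."""
    row, left, right = list(P), [P[0]], [P[-1]]
    while len(row) > 1:
        row = [row[i] + row[i + 1] for i in range(len(row) - 1)]
        left.append(row[0])
        right.append(row[-1])
    return left, list(reversed(right))
```

In particular the left edge satisfies the closed formula Q_i = Σ_{k=0}^{i} C(i,k) P_k, and the apex value Q_n = R_0 is just p(1).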
FIG. 2.16. Converting a cubic curve from monomial to Bezier form by inserting a triple knot at t = 1. The original monomial coefficients, P_0,..., P_3, relative to the basis C(3,k) t^k, are placed at the base and the Bezier control points, Q_0,..., Q_3, emerge along the left edge of the triangle. Along the right edge of the triangle emerge the monomial coefficients (derivatives) R_0,..., R_3 at t = 1. Notice that this diagram is just Pascal's triangle in reverse.
We illustrate the knot insertion algorithm for converting cubic curves from Bezier to monomial form with a = 0 and b = 1 in Fig. 2.17. Notice that this diagram is simply differencing. Usually when working with the monomial form, we want the coefficient S_k of t^k rather than the coefficient P_k of C(n,k) t^k. Again these coefficients are easily found since S_k = C(n,k) P_k, k = 0,..., n.
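The differencing triangle is just as short (again a hypothetical sketch, not code from the text; the function name is invented). Each level takes forward differences; the left edge yields the coefficients P_k = Δ^k Q_0 relative to C(n,k) t^k, and the right edge yields the coefficients R_k relative to C(n,k)(t - 1)^k:

```python
from math import comb  # used only in the verification below

def bezier_to_monomial(Q):
    """Differencing triangle (n-fold knot insertion at delta).
    Returns (P, R): P = monomial coefficients at t = 0 relative to
    C(n,k)*t**k (left edge, P_k is the k-th forward difference of
    Q_0) and R = monomial coefficients at t = 1 relative to
    C(n,k)*(t-1)**k (right edge)."""
    row, left, right = list(Q), [Q[0]], [Q[-1]]
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        left.append(row[0])
        right.append(row[-1])
    return left, right
```

Running this routine followed by the summation triangle of Fig. 2.16 returns the original control points, since the two triangles are inverse change of basis procedures.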
FIG. 2.17. Converting a cubic curve from Bezier to monomial form by inserting a triple knot at δ. The original Bezier control points, Q_0,..., Q_3, are placed at the base of the triangle. The monomial coefficients, P_0,..., P_3, relative to the basis C(3,k) t^k, emerge along the left edge and the monomial coefficients R_0,..., R_3, relative to the basis C(3,k) (t - 1)^k, emerge along the right edge of the triangle. Thus the same algorithm simultaneously finds all the derivatives of the Bezier curve at both of its end points. Notice that this diagram is just a differencing triangle.

2.4.5. The Newton Dual Basis. To find all the derivatives of a progressive curve p(t) with affine knots t_1,..., t_{2n}, we found in §2.3.3 that it is most efficient first to compute the coefficients of p(t) relative to the progressive basis
for the knots t_1,..., t_n, δ,..., δ. We call this progressive basis the Newton dual basis for reasons which will become clear shortly. We illustrate the homogeneous de Boor algorithm for cubic curves in Newton dual form in Fig. 2.18. Notice that the recursive evaluation algorithm for curves in Newton dual form is the same as the recursive evaluation algorithm for curves in monomial form except that factors of (t - a) are replaced by factors of (t - t_k) on the right-hand side of the recursion.
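To make the algorithm concrete, here is a hypothetical sketch (function names invented, not code from the text) of the general homogeneous de Boor algorithm alongside the reduced recurrence it collapses to when the knots are (t_1, 1),..., (t_n, 1), δ,..., δ with δ = (1, 0), so that each step costs a single multiply-add by a factor (t - t_i):

```python
def deboor_homogeneous(points, knots, t):
    """General homogeneous de Boor algorithm for a degree-n
    progressive curve.  `knots` holds 2n homogeneous pairs (x, w);
    an affine knot u is written (u, 1) and delta is (1, 0).
    Evaluates the curve at the affine parameter (t, 1)."""
    n = len(points) - 1
    P = list(points)
    for r in range(1, n + 1):
        for i in range(n, r - 1, -1):       # descending keeps old P[i-1]
            xa, wa = knots[i - 1]           # knot number i
            xb, wb = knots[i + n - r]       # knot number i + n + 1 - r
            P[i] = ((xb - t * wb) * P[i - 1]
                    + (t * wa - xa) * P[i]) / (xb * wa - xa * wb)
    return P[n]

def eval_newton_dual(points, nodes, t):
    """Reduced recurrence for the Newton dual form with knots
    t_1,...,t_n, delta,...,delta:
    P_i^r = P_{i-1}^{r-1} + (t - t_i) * P_i^{r-1}."""
    n = len(points) - 1
    P = list(points)
    for r in range(1, n + 1):
        for i in range(n, r - 1, -1):
            P[i] = P[i - 1] + (t - nodes[i - 1]) * P[i]
    return P[n]
```

With Bezier knots 0, 0, 0, 1, 1, 1 the general routine reduces to de Casteljau; with Newton dual knots the two routines return identical values, which is the source of the operation-count savings exploited by the fast knot insertion algorithms of Chapter 3 [5].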
FIG. 2.18. The homogeneous de Boor algorithm for cubic curves in Newton dual form.
Let N_0^m(t),..., N_m^m(t) denote the Newton dual basis for the knots t_{n-m+1},..., t_n, δ,..., δ. Then the Newton dual basis is defined by the recursion formula
Notice again that the recursion formula for the Newton dual basis is the same as the recursion formula for the monomial basis except that factors of (t - a) are replaced by factors of (t - t_{n-m+k}) on the right-hand side of the recursion. It follows from the recursion formula by induction on m that the Newton dual basis functions are given explicitly by
In Marsden's identity and the de Boor-Fix formula, terms involving vector-valued knots at δ are interpreted, just as they are in the de Boor evaluation algorithm, as factors of 1 rather than (t_k - t) because (t_k - t w_k) = 1 when (t_k, w_k) = δ = (1, 0). Thus by (2.12), for the knot sequence t_1,..., t_n, δ,..., δ, the dual basis is
Therefore, up to sign, the dual basis to the progressive basis with knots t_1,..., t_n, δ,..., δ is the Newton basis with nodes t_1,..., t_n. It is for this reason
that the progressive basis with knots t_1,..., t_n, δ,..., δ is called the Newton dual basis. From Marsden's identity, we have
By definition, the divided difference f[t_k,..., t_n] is the coefficient of (t - t_{k+1}) ... (t - t_n) in the Newton expansion of f(t) [11]. Thus for all x,
Equating the coefficients of the Newton basis in these two formulas for (x - t)^n gives us the divided difference formula
for the Newton dual basis. Notice that on the right-hand side of (2.14) the divided difference is taken with respect to t, and x is treated as a constant. The blossoming form of the dual functionals for the Newton dual basis is
These values are precisely the values calculated in the first stage of Boehm's derivative algorithm (see §2.3.3). Thus the coefficients of p(t) relative to the Newton dual basis are computed to speed the calculation of the derivatives for arbitrary progressive curves.

To find the derivatives of a curve p(t) = Σ_j P_j N_j^n(t) represented in Newton dual form is particularly simple. Let N_0^{n-k}(t),..., N_{n-k}^{n-k}(t) be the Newton dual basis relative to the knots t_{k+1},..., t_n, δ,..., δ. By Corollary 2.3.2 of §2.3.3 the first derivative for a progressive curve p(t) = Σ_k P_k b_k^n(t) is given by the standard two-term formula
For curves in Newton dual form this formula reduces to the one-term formula
since (t_{k+n+1}, w_{k+n+1}) = δ = (1, 0) and w_{k+1} = 1, k = 0,..., n - 1, for the Newton dual basis. (This one-term differentiation formula also follows by duality [2] since the Newton basis has a trivial one-term degree elevation formula.) Thus the coefficients of the first derivative are just n times the coefficients of the original curve after we drop the coefficient P_0. Now it follows
easily by induction on k that the coefficients of p^(k)(t) relative to the basis N_0^{n-k}(t),..., N_{n-k}^{n-k}(t) are
That is, we simply take the last n + 1 - k coefficients and multiply them each by the same normalizing factor n!/(n - k)!. Notice that identical results are true for progressive curves in monomial form since the monomial form is a special case of the Newton dual form. For the monomial form, of course, these results can be verified explicitly by a simple computation.

To find all the derivatives of a progressive curve p(t) at some parameter value t = a, we must compute
where 6 appears as an argument k times. Ordinarily for progressive curves we would first insert an n-fold knot at 6 and then insert an 7i-fold knot at a. However, curves in Newton dual form already have an n-fold knot at 6. Therefore only n-fold knot insertion at a is required to find all the derivatives of p(t] at t — a. But recall from §2.2.3 that n-fold knot insertion is equivalent to evaluation. Thus this derivative algorithm is equivalent to running the de Boor evaluation algorithm for the Newton dual basis and then reading the monomial coefficients off the right edge of the diagram (see Fig. 2.18). In §2.4.4 we saw that an identical result holds for curves in monomial form. Like differentiation, knot insertion is also simpler for curves in Newton dual form because there are fewer arithmetic operations in the de Boor algorithm. Indeed for the Newton dual basis the homogeneous de Boor algorithm
where r = 1,..., n and i = r,..., n reduces to the simple recurrence
since (t_{i+n+1-r}, w_{i+n+1-r}) = δ = (1, 0) and w_i = 1, i = 1,..., n. Therefore, by Ramshaw's blossoming algorithm, computing blossom values recursively is also simpler for curves represented in Newton dual form. Thus, like differentiation, it is often faster to convert to Newton dual form in order to perform knot insertion on arbitrary progressive curves. For further details on fast knot insertion algorithms using the Newton dual form, see Chapter 3 [5].

To be consistent with our treatment of other progressive curves, we have derived the properties of curves in Newton dual form using only blossoming and the de Boor algorithm. However, we could also have derived virtually all of these properties from the divided difference formulation of the Newton dual basis given in (2.14). For example:
de Boor Algorithm—which follows from applying Leibniz's rule to (2.14).
Boehm Knot Insertion Algorithm—which follows from applying the divided difference recurrence to (2.14) and solving for the new coefficients.
One-Term Derivative Formula (Integration)—which follows from differentiating (2.14) and observing that differentiation and divided difference commute.
Marsden Identity—which follows from the definition of divided difference.
de Boor-Fix Formula—which follows from applying divided difference to Taylor's theorem and using the one-term differentiation formula.

The Newton basis has many interesting and valuable properties. It has uses in polynomial interpolation, and it can be applied in fast forward differencing to rapidly compute evenly spaced points along a polynomial curve. We suspect that because of duality the Newton dual basis also may have many additional interesting properties. One application we are currently exploring is the dual of forward differencing, which leads to a fast knot insertion algorithm for progressive curves [5, Chap. 3, this volume]. Other properties of the Newton dual basis are topics for current and future investigations.

2.4.6. Summary. We summarize our examples of progressive schemes briefly in Table 2.1.

2.5. Conclusions and Open Questions
B-spline curves of degree n are defined locally by the de Boor algorithm in terms of knot sequences t_1,..., t_{2n} satisfying the B-spline constraints t_i ≤ t_j whenever i < j and t_{i+n} ≠ t_i. Analogously, progressive schemes of degree n are defined by the homogeneous de Boor algorithm in terms of progressive sequences (t_1, w_1),..., (t_{2n}, w_{2n}) satisfying the progressive constraints t_{j+n} w_i ≠ t_i w_{j+n} whenever 1 ≤ j ≤ i ≤ n. When diagrammed as a graph, the de Boor recursive evaluation algorithm has an idiosyncratic in-out structure: unnormalized labels which enter a node also leave the node in the same direction. One of the key insights of blossoming is that symmetric multiaffine polynomials satisfy a recurrence with a similar characteristic structure. The algorithms and formulas developed here depend only on this in-out structure.

The main theme of this chapter is unification. We showed that many well-known local formulas and procedures for B-spline curves do not require the knots to be nondecreasing or even affine. This observation allows us to unify many algorithms, derivations, and representations by generalizing from B-splines to arbitrary progressive schemes. We have shown that de Boor's evaluation algorithm, Ramshaw's blossoming algorithm, Boehm's knot insertion algorithm, the Oslo algorithm, the standard two-term differentiation
TABLE 2.1
Examples of progressive bases.

Curve        | Basis functions    | Knot sequence                   | Conditions                        | Dual basis
Progressive  | b_0^n,..., b_n^n   | (t_1,w_1),..., (t_{2n},w_{2n})  | t_i w_{j+n} ≠ t_{j+n} w_i         | Polya
B-spline     | B-splines          | t_1,..., t_{2n} affine          | t_i ≤ t_j (i < j), t_{i+n} ≠ t_i  | (Homogenized) Polya
Bezier       | Bernstein          | a,..., a, b,..., b              | a, b affine, b ≠ a                | Bernstein (reversed)
Power        | R_0^n,..., R_n^n   | t_1,..., t_n, t_0,..., t_{n-1}  | t_i ≠ t_j (i ≠ j)                 | Lagrange
Monomial     | C(n,k)(t - a)^k    | a,..., a, δ,..., δ              | a affine                          | Monomial (reversed)
Newton dual  | N_0^n,..., N_n^n   | t_1,..., t_n, δ,..., δ          | t_i affine                        | Newton
algorithm, Boehm's derivative algorithm, a classical integration formula, Sablonniere's change of basis procedure, Marsden's identity, and the de Boor-Fix form of the dual functionals all extend readily to progressive curves. We have also provided several interesting examples of progressive bases including the B-splines, the Bernstein basis, the power basis, the monomial basis, and the Newton dual basis. For the power basis the knot sequence is not nondecreasing, and for the monomial and Newton dual bases, half the knots are not even affine. Nevertheless, results initially developed only for B-spline curves remain valid for these more general progressive schemes.

If unification was our main theme, then blossoming, homogenization, and knot insertion were our leitmotifs. In [21]-[23] Ramshaw developed the theory of blossoming and homogenization for B-spline and progressive curves. Here we have tried to flesh out this theory by providing algorithms and examples. All of the algorithms listed above were derived using blossoming and homogenization, and all the change of basis procedures, including differentiation, could be interpreted as knot insertion procedures. These change of basis procedures are especially significant; one of our main motives for studying progressive curves was to find techniques for transforming B-spline curves into other piecewise polynomial forms. We noticed too that formulas for progressive curves often simplify when they are homogenized and some of the homogeneous coordinates are set to zero. This device was used to derive an integration algorithm for progressive basis functions; it was also applied to construe the monomial and Newton dual forms as progressive curves.

Generalizing B-spline results to the monomial and Newton dual bases is particularly important. The monomial basis is by far the simplest and most common polynomial basis.
It is enlightening to know that many B-spline formulas are not exotic results, but are actually extensions of very easy formulas that are valid even for the simple monomial form. Viewing the monomial and Newton dual bases as progressive schemes allows us to use knot insertion procedures to transform B-spline curves to the monomial and Newton dual forms. These transformations lead to fast differentiation algorithms and knot insertion procedures. The Newton dual basis plays a critical role in these procedures. Therefore we have taken this opportunity to emphasize its importance, to highlight some of its properties, and to make this central role a bit more explicit. Further details on the relationship between fast knot insertion procedures and the Newton dual basis are provided in Chapter 3 of this book [5].

Several intriguing open questions remain. Relaxing the B-spline constraints that the knots must be affine and nondecreasing leads to additional interesting progressive schemes and helps to unify much of the theory of polynomial and piecewise polynomial curves. Are there any other interesting ways to relax the B-spline constraints? Another possible generalization would be to permit complex knots. Does this extension lead to any useful or interesting polynomial bases?
One way to search for interesting and novel progressive schemes is to look for interesting dual bases. Recall that the power basis is dual to the Lagrange basis and the Newton dual basis is dual to the Newton basis. Since the formulas for the dual basis are quite explicit, such bases may be easier to discover. For example, up to constant multiples, the cubic Hermite basis is the dual basis for the progressive sequence ½, 0, 0, 1, 1, -½. This observation suggests that the corresponding progressive basis may also have interesting analytic properties. We would like to know what other interesting or well-known polynomial bases are actually progressive or dual to progressive bases. Any such schemes would certainly be related to B-splines in some interesting way and, if progressive, would immediately inherit all of the algorithms and formulas developed here for progressive curves.

Finally, there are two other well-known spline schemes to which it would be useful to generalize blossoming: geometrically continuous splines and multivariate splines. For geometrically continuous splines analogues of Marsden's identity and the de Boor-Fix dual functionals are known [3]. These formulas can be used to derive an analogue of the blossom for geometrically continuous splines, but the blossoming formulas are much more complicated than for progressive curves. Nevertheless these formulas can be used, at least in theory, to derive evaluation and knot insertion algorithms for geometrically continuous splines [7]. An alternative, more geometric, approach to blossoming geometrically continuous splines is given by Seidel in [28]. On the other hand, analogues of blossoming for general multivariate splines are not currently known, and this problem remains an important open question.
Acknowledgments. We would like to thank Professor Tony DeRose of the University of Washington for explaining to us the full workings of Sablonnière's tetrahedral change of basis algorithm, and for pointing out to us the relationship between differentiation and evaluation at vector-valued knots. This work was supported in part by Natural Sciences and Engineering Research Council of Canada grant OGP0036825 and by National Science Foundation grant CCR-9113239.

References
[1] P. J. Barry, Urn models, recursive curve schemes, and computer aided geometric design, Ph.D. dissertation, Dept. of Mathematics, Univ. of Utah, Salt Lake City, UT, 1987.
[2] P. J. Barry, T. D. DeRose, and R. N. Goldman, B-splines, Polya curves, and duality, J. Approx. Theory, 65 (1991), pp. 3-21.
[3] P. J. Barry, N. Dyn, R. N. Goldman, and C. A. Micchelli, Polynomial identities for piecewise polynomials determined by connection matrices, Aequationes Mathematicae, 42 (1991), pp. 123-136.
[4] P. J. Barry and R. N. Goldman, Interpolation and approximation of curves and surfaces using Polya polynomials, CVGIP: Graphical Models and Image Processing, 53 (1991), pp. 137-148.
[5] P. J. Barry and R. N. Goldman, Factored knot insertion, this volume, pp. 65-88.
[6] ——, Knot insertion algorithms, this volume, pp. 89-133.
[7] P. J. Barry, R. N. Goldman, and C. A. Micchelli, Knot insertion algorithms for piecewise polynomial spaces determined by connection matrices, Adv. in Comput. Math., to appear.
[8] W. Boehm, Inserting new knots into B-spline curves, Comput. Aided Design, 12 (1980), pp. 199-201.
[9] W. Boehm, G. Farin, and J. Kahmann, A survey of curve and surface methods in CAGD, Comput. Aided Geom. Design, 1 (1984), pp. 1-60.
[10] C. de Boor, On calculating with B-splines, J. Approx. Theory, 6 (1972), pp. 50-62.
[11] ——, A Practical Guide to Splines, Springer-Verlag, New York, 1978.
[12] C. de Boor and G. Fix, Spline approximation by quasi-interpolants, J. Approx. Theory, 8 (1973), pp. 19-45.
[13] C. de Boor and K. Höllig, B-splines without divided differences, in Geometric Modeling, G. Farin, ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987, pp. 21-27.
[14] C. de Boor, T. Lyche, and L. Schumaker, On calculating with B-splines II: Integration, in Numerische Methoden der Approximationstheorie, Proceedings Oberwolfach Conference, L. Collatz, G. Meinardus, and H. Werner, eds., Birkhäuser, 1976, pp. 123-146.
[15] P. de Casteljau, Formes à Pôles, Hermès, Paris, 1985.
[16] E. Cohen, T. Lyche, and R. Riesenfeld, Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics, Computer Graphics and Image Processing, 14 (1980), pp. 87-111.
[17] G. Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, Academic Press, San Diego, CA, 1988.
[18] R. N. Goldman, Blossoming and knot insertion algorithms for B-spline curves, Comput. Aided Geom. Design, 7 (1990), pp. 69-81.
[19] T. Lyche and K. Mørken, Making the Oslo algorithm more efficient, SIAM J. Numer. Anal., 23 (1986), pp. 663-675.
[20] M. J. Marsden, An identity for spline functions with applications to variation-diminishing spline approximation, J. Approx. Theory, 3 (1970), pp. 7-49.
[21] L. Ramshaw, Blossoming: A Connect-the-Dots Approach to Splines, Digital Systems Research Center, Report 19, Palo Alto, CA, 1987.
[22] ——, Béziers and B-splines as multiaffine maps, in Theoretical Foundations of Computer Graphics and CAD, R. A. Earnshaw, ed., NATO ASI Series F, 40, Springer-Verlag, New York, 1988, pp. 757-776.
[23] ——, Blossoms are polar forms, Comput. Aided Geom. Design, 6 (1989), pp. 323-358.
[24] P. Sablonnière, Spline and Bézier polygons associated with a polynomial spline curve, Comput. Aided Design, 10 (1978), pp. 257-261.
[25] L. L. Schumaker, Spline Functions: Basic Theory, Wiley, New York, 1981.
[26] H. P. Seidel, Knot insertion from a blossoming point of view, Comput. Aided Geom. Design, 5 (1988), pp. 81-86.
[27] ——, A new multiaffine approach to B-splines, Comput. Aided Geom. Design, 6 (1989), pp. 23-32.
[28] ——, Polar forms for geometrically continuous spline curves of arbitrary degree, ACM Trans. Graphics, 12 (1993).
CHAPTER 3
Factored Knot Insertion Phillip J. Barry and Ronald N. Goldman
3.1. Introduction
One common mathematical technique for applying an operation to a curve efficiently is to convert the curve into a form in which that operation can be accomplished rapidly. For example, to evaluate, at t, a degree n parametric polynomial curve
written in terms of a basis φ_j(t), one may convert it to Newton form (see, e.g., [9]) over a sequence of nodes s_1, ..., s_n,
and evaluate by nested multiplication. This evaluation procedure will produce as a by-product the divided differences G[t, s_1, ..., s_j] for j = 0, ..., n - 1, i.e., the coefficients of G in Newton form over the sequence t, s_1, ..., s_{n-1}. Iteration of this observation (see Fig. 3.1) underlies techniques such as evaluation by fast forward differencing (see, e.g., [4], [15]). While there is a start-up cost for performing the transformation to the initial Newton representation, evaluations in the new representations may be faster than evaluations in the old one. Thus if enough evaluations are done, and if other effects such as the introduction or magnification of round-off error can be minimized, effecting the transformation and evaluating the new form may be more efficient than doing the evaluations in the original form. The above example is but one instance of this very general technique. In this chapter we introduce another instance by providing an algorithm which, in many cases, inserts knots into B-spline curves more rapidly than known knot insertion procedures such as Boehm's knot insertion algorithm [5] or the Oslo algorithm [13], [16], [23]. This algorithm is important for two reasons: (1) often it decreases the number of computations, and (2) the algorithm and
FIG. 3.1. Evaluation of a cubic curve by differencing and repeated nested multiplication. The algorithm begins with points G[s_i], i = 1, ..., 4, on the curve, then uses differencing to obtain part (a) of the figure. Following this computation, repeated nested multiplication yields the points G[s_i], i = 0, -1, ..., as shown in part (b). The arrows indicate which differences are combined in calculating new differences.
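The scheme of Fig. 3.1 can be sketched in code (a sketch with our own naming; the cubic and the node values below are illustrative). Differencing the curve points gives the Newton coefficients, and each subsequent nested multiplication both evaluates G at a new node and yields the Newton coefficients over the shifted node sequence:

```python
# Sketch of the Fig. 3.1 scheme: divided differences (part (a)) followed
# by repeated nested multiplication (part (b)).  Names are ours.

def newton_coeffs(nodes, values):
    """In-place divided-difference table: c[j] = G[nodes[0], ..., nodes[j]]."""
    c = list(values)
    for k in range(1, len(nodes)):
        for j in range(len(nodes) - 1, k - 1, -1):
            c[j] = (c[j] - c[j - 1]) / (nodes[j] - nodes[j - k])
    return c

def prepend_node(nodes, coeffs, t):
    """One nested multiplication: the intermediate Horner values are the
    Newton coefficients over the shifted sequence t, nodes[0], ..., and
    the final value b[0] equals G(t)."""
    b = list(coeffs)
    for j in range(len(b) - 2, -1, -1):
        b[j] = b[j] + (t - nodes[j]) * b[j + 1]
    return [t] + nodes[:-1], b

G = lambda t: 2 - t + 3 * t**2 - t**3          # any cubic
nodes = [1.0, 2.0, 3.0, 4.0]                   # s_1, ..., s_4
coeffs = newton_coeffs(nodes, [G(s) for s in nodes])
extended = []
for s in [0.0, -1.0, -2.0]:                    # the nodes s_0, s_{-1}, ...
    nodes, coeffs = prepend_node(nodes, coeffs, s)
    extended.append(coeffs[0])                 # equals G(s)
```

Each pass costs one nested multiplication (n multiply–adds), which is the duality with Step 4′ discussed in §3.4.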
its surrounding theory develop and unify some aspects of the theory of curves in computer aided geometric design and computer graphics. This chapter is structured as follows: §3.2 contains the algorithm in its rawest form. We prove, in §3.3, that the algorithm actually works, and we discuss some alternative derivations in §3.4. Section 3.5 is devoted to the advantages and disadvantages of the algorithm, §3.6 to a discussion of variants of the algorithm, and §3.7 to summarizing our results and to concluding remarks.

3.2. The Algorithm
Because of the simplicity of the algorithm, we introduce it presently, delaying a discussion of its proofs, merits and drawbacks, optimizations, etc., to subsequent sections. Begin with a degree n B-spline curve with original control vertices {V_i} over a knot vector {t_i}. Let {τ_i} be a refinement of {t_i} such that τ_i = t_i, i ≤ q, and τ_i = t_{i-m}, i > q + m. That is, {τ_i} is a refinement of {t_i} obtained by adding m new knots in the interval [t_q, t_{q+1}]. The following algorithm finds new control vertices {W_i} for the curve represented over {τ_i}.
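The displayed steps of the algorithm are not reproduced in this text, but a hedged sketch can be reconstructed from the description above and the blossom proof in §3.3: Step 2 is a divided-difference start-up triangle, and Step 4 updates the auxiliary points with division-free combinations of the form P + (a - b)Q. All names below are our own, and we use scalar control values (d = 1); points are handled componentwise.

```python
# A hedged sketch of factored knot insertion for one knot interval,
# reconstructed from the blossom derivation in Section 3.3 (the book's
# displayed steps are not reproduced here).  Steps 1 and 3 of the
# algorithm are relabelings and are implicit in the indexing.

def factored_insert(T, V, new_knots):
    """T: the 2n knots t_{q-n+1}, ..., t_{q+n} around [t_q, t_{q+1}];
    V: the n+1 control values of that segment; new_knots: sorted knots
    in (t_q, t_{q+1}).  Returns the refined control values."""
    n = len(V) - 1
    P = list(V)
    # Step 2 (start-up): a divided-difference triangle; afterwards P[j]
    # is the blossom value B(s_{j+1}, ..., s_n, (1,0), ..., (1,0)).
    for i in range(1, n + 1):
        for j in range(n, i - 1, -1):
            P[j] = (P[j] - P[j - 1]) / (T[n + j - i] - T[j - 1])
    s = T[:n] + list(new_knots) + T[n:]      # refined knot line
    m = len(new_knots)
    W = [P[0]]                               # leftmost vertex is unchanged
    # Step 4: sweep i inserts the knot s_{i+n}; each update is the
    # division-free combination P + (a - b) * Q noted in Section 3.5.
    for i in range(1, n + m):
        for j in range(n):                   # P[j+1] still holds sweep i-1
            P[j] = P[j] + (s[i + n - 1] - s[i + j - 1]) * P[j + 1]
        W.append(P[0])
    W.append(V[-1])                          # rightmost vertex is unchanged
    return W

def blossom(T, V, args):
    """Multiaffine de Boor recurrence: blossom of the segment at args;
    used only to check the result via the dual functional property."""
    n = len(V) - 1
    d = list(V)
    for r in range(1, n + 1):
        u = args[r - 1]
        d = [((T[j + n] - u) * d[j] + (u - T[j + r - 1]) * d[j + 1])
             / (T[j + n] - T[j + r - 1]) for j in range(n - r + 1)]
    return d[0]

T = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]           # cubic segment on [2, 3]
V = [0.0, 2.0, 3.0, -1.0]
W = factored_insert(T, V, [2.3, 2.7])
s = T[:3] + [2.3, 2.7] + T[3:]
# dual functional check: W[i] should equal blossom(s_{i+1}, s_{i+2}, s_{i+3})
```

On this example the computed vertices should agree with the blossom values at consecutive refined knots, which is exactly the dual functional property used in the proof of §3.3.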
This procedure is diagrammed in Fig. 3.2 (all figures in this chapter illustrate the cubic case). Step 1 is merely a relabelling of the relevant knots and control vertices, while Step 3 relabels some auxiliary points. Step 2 can be considered a start-up transformation. The heart of the algorithm is Step 4, where the new control vertices are actually found. Note that the algorithm is a local procedure; if knots are to be inserted in more than one original knot interval,
FIG. 3.2. The algorithm. The arrows indicate the manner in which the values are combined. Steps 1-3 are shown in part (a), Step 4 in part (b).
FIG. 3.3. Step 4' of the alternative arrangement of the algorithm.
they are dealt with interval by interval, left to right (always working with an updated knot vector and control vertex list). There is an alternative ordering of the computations. Let P_j^1 = P_j^0 and replace Step 4 with Step 4'.
This arrangement is shown in Fig. 3.3. Most observations below apply to both versions of the algorithm, perhaps with small changes. Although we shall work primarily with the original version, in some contexts it is easier to employ the alternative version. Because the procedure "factors" the knot insertion process through the form obtained in Step 2, we call it the "factored knot insertion algorithm." Although it was discovered independently, it is very similar to an algorithm recently developed in [26], and can therefore be viewed as a variant of that algorithm. However, the derivation, analysis, and further development of the algorithm in this paper is substantially different from that in [26].
3.3. Proof
There exist many derivations of the factored knot insertion algorithm. To the best of our knowledge there is no absolutely trivial proof; however, none of the proofs is difficult and each adds a valuable perspective. The approach we will take in this section is based on a technique called "blossoming," which was pioneered by Ramshaw and others [12], [24], [25], [28]. Some alternative approaches are explored briefly in the following section. The central idea in blossoming is to associate with any degree n (or less) univariate polynomial its "blossom" (also known as its "polar form" [25]), the unique n-variate multiaffine symmetric polynomial that when evaluated on its diagonal reduces to the original polynomial. Because of its simple properties, it is often easier to work with the blossom than the original polynomial. In this paper we will work with a variant of the blossom called the "multilinear blossom." The multilinear blossom of a degree n (or less) univariate polynomial G(t) is a homogeneous polynomial B_G[(u_1, w_1), ..., (u_n, w_n)] with the following properties: multilinear: for any i and constants a_1, a_2
symmetry: if π is any permutation of {1, ..., n}, then
diagonal
If G is a piecewise polynomial, then the blossom will be a piecewise polynomial. If G is a B-spline curve, we also have the following important property: dual functional: let G be a B-spline curve with knots {u_i} and control vertices {W_i}. Then
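For a single cubic these properties can be made concrete. The following sketch (our own construction, not from the text) builds the multilinear blossom of G(t) = Σ a_k t^k by symmetrizing the homogenized monomials u^k w^{n-k}; the diagonal, symmetry, and multilinearity properties can then be checked numerically:

```python
# Illustration (ours): the multilinear blossom of a cubic, obtained by
# replacing t^k with the average over k-subsets of the arguments of
# (product of u over the subset) * (product of w over the complement).

from itertools import combinations
from math import comb

def multilinear_blossom(a, args):
    """a: coefficients [a0, ..., a3] of G; args: three (u, w) pairs."""
    n = len(args)
    total = 0.0
    for k, ak in enumerate(a):
        s = 0.0
        for S in combinations(range(n), k):
            term = 1.0
            for i, (u, w) in enumerate(args):
                term *= u if i in S else w
            s += term
        total += ak * s / comb(n, k)
    return total

a = [1.0, -2.0, 0.5, 3.0]                      # G(t) = 1 - 2t + 0.5t^2 + 3t^3
G = lambda t: a[0] + a[1] * t + a[2] * t * t + a[3] * t ** 3
# diagonal:  B[(t,1),(t,1),(t,1)] = G(t)
# symmetry:  the order of the three argument pairs does not matter
# multilinearity:  (s,1) = (b,1) + (s-b)*(1,0) slot by slot
```

Evaluating with all second components equal to 1 gives the multiaffine blossom; a (1, 0) argument plays the role of the vector knot δ used below.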
where the blossom segment used is any blossom segment affected by W{. When inserting knots into an original knot interval, the corresponding segment of the curve is subdivided into many subsegments. However, since each of these represents a portion of the original segment, the blossoms for the subsegments are identical to the blossom for the original segment. Therefore, we can use the dual functional result to express new control vertices resulting from inserting knots in an original knot interval in terms of the blossom segment
corresponding to that interval. Thus, since the algorithm works with only one interval at a time, we need be concerned with only one blossom segment at a time. For further information on blossoming in general, and the multilinear blossom in particular, the reader is referred to [17, Chap. 2 of this volume], [24]. In order to prove that the algorithm works we will use the above properties of the multilinear blossom. There are two steps in the proof. The first step is to show that in Step 2
This is done by induction on j. V_i^0 = B_G[(t_{q-n+1+i}, 1), ..., (t_{q+i}, 1)] by the dual functional property of the multilinear blossom, so the statement is true for j = 0. Further, by the recursive definition of V_j^i, the induction hypothesis, and the multilinearity and symmetry of the multilinear blossom, we get
Figure 3.4 illustrates Step 2 of the algorithm in terms of the computed blossom values. Notice that upon completion of Step 3 we have the values P_j^0 = B_G[(s_{j+1}, 1), ..., (s_n, 1), (1, 0), ..., (1, 0)]. In the second part of the proof we show that
Again we use induction, this time on i. When i = 0 the statement is true. The recursive definition of P_j^i, the induction hypothesis, and the multilinearity and symmetry properties of B_G yield
FIG. 3.4. Step 2 of the algorithm in terms of the computed blossom values. To conserve space in this and subsequent figures, we write (t, 1) as t, (1, 0) as δ, and B_G[α, β] as αβ. The labels on the arrows are the coefficients used in the algorithm.
FIG. 3.5. Step 4 of the algorithm in terms of blossom values.
To conclude the proof, note that
Figure 3.5 illustrates Step 4 of the algorithm in terms of the blossom values. Thus we have shown that the algorithm does indeed provide the desired values. Perspectives on why it provides the desired values will be one of the themes of the next section.

3.4. Other Perspectives
In the previous section we demonstrated that the algorithm works by showing that the results are the correct points. In this section we sketch a few alternative derivations. These other approaches are important not only because they provide alternative means of obtaining the algorithm, but also because they yield higher level perspectives and thus give more insight into why the algorithm works. Additionally, they connect the algorithm with other known procedures. Knot insertion: The multilinearity of B_G yields:
The proof in §3.3 consists mainly of applying two specializations of this identity. If we take w_i = 1 for all i, and c_1 = c_2 = 1, the multilinear blossom becomes the multiaffine blossom and the above result is essentially Boehm's knot insertion formula [27]. Therefore, formula (3.1) can be viewed as a multilinear variant of Boehm's knot insertion formula. We now elaborate on this observation.
In the multilinear blossom, rather than working with the knot values s_i, we work with pairs (s_i, w_i). In the algorithm, every pair has either w_i = 0 or w_i = 1. When w_i = 1 the pair (s_i, w_i) can be identified with the knot s_i, which represents a point of the affine line (the domain of the spline curve is a subset of the affine line). On the other hand, the pairs with w_i = 0 represent vectors defined as the difference of two points on the affine line. Therefore, in this context, the curve will have knots which can be interpreted as vectors, in addition to knots which are points. Since factored knot insertion is applied to the curve one interval at a time, we need not attend to such concerns as how (and where) to smoothly link spline segments dependent on one or more knots which are vectors. For further details and more rigor, see [17, Chap. 2, this volume]. Both point-knots and vector-knots can be inserted into a B-spline curve with (3.1). In particular, Steps 2 and 3 transform a B-spline curve segment written in terms of the knots t_{q-n+1}, ..., t_{q+n} to one over the knots (t_{q-n+1}, 1), ..., (t_q, 1), (1, 0), ..., (1, 0). This can be thought of as inserting (1, 0) as a knot with multiplicity n. Each iteration of the i loop in Step 4 then transforms the curve expressed over the knot sequence (s_i, 1), ..., (s_{i+n-1}, 1), (1, 0), ..., (1, 0) to one over (s_{i+1}, 1), ..., (s_{i+n}, 1), (1, 0), ..., (1, 0). Since this is the result of inserting (s_{i+n}, 1) in the sequence immediately prior to the first appearance of (1, 0), the algorithm can be interpreted as a sequence of knot insertions using the homogeneous form of Boehm's algorithm.

Divided differences: An alternative approach to Step 4 of the algorithm utilizes a divided difference representation. Let B_{i,j}(t) for j = 0, ..., n be the basis functions over (s_{i+1}, 1), ..., (s_{i+n}, 1), (1, 0), ..., (1, 0) so that
These are the bases through which the knot insertion is factored. It is possible to show (see Chapter 2) that
This implies
FIG. 3.6. The duality between forward differencing (part (a) of the figure) and factored knot insertion (part (b)). Note that one diagram can be transformed into the other by replacing blossom values by the corresponding divided differences (or vice versa), reversing the arrows, and introducing minus signs in the appropriate places.
which in turn gives the formula in Step 4 of the algorithm. Duality: The third perspective is manifested in the similarity between Figs. 3.1 and 3.2. Not surprisingly (although elegantly), there are intimate ties between factored knot insertion and evaluation by repeated nested multiplication. More specifically, the two processes are dual, as we now discuss. If we let D_{i,j}(t), j = 0, ..., n, be the elements of the Newton basis over the node sequence
then the B_{i,j}(t) and the D_{i,j}(t) satisfy a de Boor-Fix type identity:
This result, proved in Chapter 2, implies that if A is the matrix which transforms the basis B_{i-1,j} to the B_{i,j}, then the transpose of A transforms the D_{i,n-j} to the D_{i-1,n-j} [1], [3]. Evaluation schemes such as fast forward differencing are sometimes presented as a sequence of steps which are transformations of a curve expressed in terms of the D_{i,j}(t) rather than as a sequence of nested multiplications. Such a transformation is dual to one iteration of the i loop in Step 4 of the algorithm, as shown in Fig. 3.6. Nested multiplication as presented in §3.1 is dual to one iteration of Step 4' of the alternative form of the algorithm. Step 4' is a sequence of transformations between a B-spline curve segment represented over knots to one represented over
FIG. 3.7. The duality between evaluation by nested multiplication of the Newton form of a curve (shown in part (a)) and the alternative form of factored knot insertion (part (b)).
knots (s_{i+n}, 1), (s_{i+n-1}, 1), ..., (s_{i+1}, 1), (1, 0), ..., (1, 0). (Note that the knots are not nondecreasing in the first component as one might expect. Since we are working with a single segment, it is possible to define and work with B-spline curves over decreasing knots. For more details see, e.g., [17].) As was mentioned in the Introduction, evaluation by repeated nested multiplication can be thought of as a sequence of transformations between the Newton form of a curve dependent on the node sequence s_{i+1}, ..., s_{i+n} and the Newton form with node sequence s_i, ..., s_{i+n-1}. Let B_{i,j}(t), j = 0, ..., n, be the basis functions over (s_{i+n}, 1), (s_{i+n-1}, 1), ..., (s_{i+1}, 1), (1, 0), ..., (1, 0) such that
Let D_{i,j}(t), j = 0, ..., n, be the elements of the Newton basis over nodes s_{i+1}, ..., s_{i+n}, i.e.,
Then these two bases are related by the identity (see Chapter 2)
Figure 3.7 presents the duality between nested multiplication and one iteration of Step 4'. The link between factored knot insertion and evaluation by nested multiplication is important for both practical and theoretical reasons. Practically,
results about evaluation by nested multiplication may be applicable to the knot insertion algorithm. A couple of areas of current research where this connection may be particularly useful are running the algorithm using only integer values, and designing special hardware for fast implementation [20], [21]. Theoretically, this connection unifies aspects of two different yet important representations: B-spline form and Newton form. Thus this approach is of greater importance than the hasty exposition we have given it indicates. While we will briefly mention a few ramifications of this connection below, the reader interested in a richer explanation of B-spline dual functionals is referred to [3], [9]-[11].

Differentiation: There are also many other possible derivations. While we will not give any more of these in detail, we will note the close relationship between the algorithm and differentiation. One can use the identity
(here the blossom in the second line of the formula is an (n - j)-variate function and the blossom in the final line is n-variate, but the partial derivative depends on only n - j values) to derive the algorithm [14]. Or, one can also obtain the algorithm using more traditional B-spline derivative results [22]. The knot insertion algorithm in [26] is also derived from a derivative formula. To conclude this section, we note that the basis B_{i,j} used in the algorithm is not new to computer aided geometric design. Boehm's algorithm for finding the derivatives of a B-spline curve can be thought of as a two-step process [7]. In the first step one transforms the curve segment into an intermediate form, and in the second step one transforms this intermediate form to the power basis. The basis for the intermediate form is precisely the B_{i,j} [17, Chap. 2, this volume].

3.5. Analysis of the Algorithm
Standard knot insertion techniques such as Boehm's knot insertion algorithm or the Oslo II algorithm (the version of the Oslo algorithm which works directly with the control vertices) often rely on convex combinations of the form αP + (1 - α)Q, where P and Q are points and α is a real number usually of the form (a - b)/(c - d). The algorithm in §3.2 replaces these combinations with combinations of the form P + (a - b)Q. The division is eliminated, and the number of multiplications is reduced, as are the number of additions or subtractions. Thus the strong point of factored knot insertion is its speed. However this speed is not without penalty, as the price for it is a start-up cost, decreased flexibility, and perhaps more susceptibility to build-up of numerical
error. In this section we will discuss the advantages and disadvantages of the algorithm. We begin with a more rigorous computation count for our algorithm and Boehm's knot insertion algorithm. To insert m new knots into a B-spline curve of degree n using Boehm's algorithm requires computing nm convex combinations of points. Finding α and 1 - α in each of these combinations requires 3 additions/subtractions and 1 division. If the points are in d-space, then the products αP and (1 - α)Q require 2d multiplications, and an additional d additions are required to add the products. Thus there are a total of nm(3 + d) additions/subtractions, nm divisions, and 2dnm multiplications. It is possible to rewrite the convex combination as Q + α(P - Q). If this alternative form is used, the total cost is 2nm(1 + d) additions/subtractions, nm divisions, and dnm multiplications. Although this form reduces the number of multiplications, it increases the number of additions/subtractions (unless d = 1) and might be more prone to round-off error.

The factored knot insertion algorithm possesses a start-up cost (Step 2) of n(n + 1)(1 + d)/2 subtractions and n(n + 1)d/2 divisions. If all the new knots lie in the same knot interval, then each of the first m iterations of the i-loop in Step 4 consists of n(1 + d) additions/subtractions and nd multiplications. The final n - 1 iterations require a total of n(n - 1)(1 + d)/2 subtractions and n(n - 1)d/2 multiplications. Thus the total number is (n + m)n(1 + d) additions/subtractions, n(n + 1)d/2 divisions, and nd(m + (n - 1)/2) multiplications. (Note that the total number of combinations (excluding the start-up calculations) done in this case is mn + n(n - 1)/2 for our algorithm, which is n(n - 1)/2 more than for Boehm's algorithm. Thus not only is there an extra cost for start-up, but there are also extra combinations done.
Techniques eliminating at least some of these extra combinations exist; one such technique is described in the next section.) As an example suppose d = 2 and n = 3. The cost for Boehm's algorithm is 15m additions/subtractions, 3m divisions, and 12m multiplications. (The alternative form will require 18m additions/subtractions, 3m divisions, and 6m multiplications.) On the other hand, our knot insertion algorithm requires 27 + 9m additions/subtractions, 12 divisions, and 6 + 6m multiplications. Asymptotically in m, our knot insertion algorithm eliminates the divisions, cuts in half the number of multiplications, and reduces by about a third the number of additions/subtractions, or, for the alternative form, eliminates the divisions and cuts in half the number of additions/subtractions. For general n and large m our knot insertion algorithm takes (d + 1)/(d + 3) times as many additions/subtractions, essentially eliminates the divisions, and cuts the number of multiplications in half compared with the usual form of Boehm's algorithm, and halves the number of additions/subtractions and essentially eliminates the divisions compared with the alternative form. Thus we have shown that there are cases where our knot insertion algorithm takes fewer operations than one of the standard knot insertion techniques.
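The counts above can be written out and checked mechanically (formulas transcribed from this section; the helper names are ours):

```python
# Operation counts for inserting m knots into one interval of a degree-n
# B-spline curve with control points in d-space, as derived above.

def boehm_counts(n, d, m):
    """(additions/subtractions, divisions, multiplications), usual form."""
    return n * m * (3 + d), n * m, 2 * d * n * m

def factored_counts(n, d, m):
    """Same triple for factored knot insertion, start-up included."""
    adds = (n + m) * n * (1 + d)
    divs = n * (n + 1) * d // 2
    mults = n * d * (2 * m + n - 1) // 2     # = n*d*(m + (n-1)/2)
    return adds, divs, mults

# d = 2, n = 3: Boehm costs (15m, 3m, 12m); factored costs
# (27 + 9m, 12, 6 + 6m), so factored is at least as cheap in every
# category once m >= 5, but not for small m (at m = 4 it does more adds).
```

This makes the break-even behavior discussed in the text easy to explore for other values of n and d.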
Although this decrease is only by a constant factor, this factor may be enough that some applications which are not severely affected by the drawbacks of the technique (see below) may benefit from it. The decrease occurs most often when many knots are to be inserted, "many knots" being at least enough to overcome the initial transformation. Often this will not be a large number. In the example above, for instance, when m > 5 the number of each type of operation done in our knot insertion technique will be less than or equal to the corresponding number in the usual form of Boehm's algorithm. In general, the overall efficiency of knot insertion algorithms is a highly complex issue depending not only on such factors as the method used, the dimension of the control vertices, and the number of knots to be inserted, but also on such factors as the hardware, the implementation, and the distribution of the new knots among the knot intervals. Additional discussion of this topic is given in [2, Chap. 4, this volume]. We now discuss the disadvantages of the algorithm. The first drawback is that the algorithm takes fewer computations only in some cases, and even then the savings, while not insignificant, are only by a small factor. In some cases, for example when only a single knot is inserted, the algorithm can take substantially longer than known techniques. A second drawback is inflexibility. For example, often one may insert some knots and then want to go back and insert one or more new knots. In Boehm's algorithm this is no problem. In our algorithm the order in which the knots are inserted matters, and thus it is more difficult to go back and insert another knot. The final drawback concerns error build-up. As mentioned above, factored knot insertion is similar in some ways to evaluation by forward differencing, which can be an unstable process [18]. For these reasons, one might expect factored knot insertion to be numerically unstable. 
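The instability of forward differencing mentioned here is easy to demonstrate numerically (an illustration of our own; the step size, polynomial, and perturbation are arbitrary): a single perturbation of size eps in the highest start-up difference grows like eps * C(k, 3) after k cubic steps, an extrapolation effect.

```python
# Illustration (ours) of forward-differencing error growth for a cubic.

from math import comb

def march(d, steps):
    """March a cubic forward-difference table; returns successive values."""
    d = list(d)
    out = []
    for _ in range(steps):
        out.append(d[0])
        for j in range(3):
            d[j] += d[j + 1]
    return out

G = lambda t: t ** 3
h = 0.1
d0 = [G(i * h) for i in range(4)]
for k in range(3):                 # start-up difference table, in place
    for j in range(3, k, -1):
        d0[j] -= d0[j - 1]
exact = march(d0, 100)
eps = 1e-6
drift = [abs(a - b) for a, b in
         zip(exact, march(d0[:3] + [d0[3] + eps], 100))]
# drift[k] grows like eps * C(k, 3)
```

Knot insertion, by contrast, only ever interpolates within the current knot interval, which is the reconciliation offered in the next paragraph.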
However, although factored knot insertion involves different computations than the other knot insertion techniques, each new control vertex is still a convex combination of the old control vertices. Therefore, if all the computations are done in exact arithmetic, any error in the old control vertices will not be magnified in finding the new control vertices. Stated more specifically, each new control vertex
for some α_i such that Σ_i α_i = 1 and α_i ≥ 0 for all i. Thus, if we perturb each V_i by ε_i, the error in W_i, if all computations are exact, will be at most max_i |ε_i|. This result can be reconciled with the above observation about the link between factored knot insertion and evaluation by forward differencing by observing that knot insertion is essentially an interpolation process; the new knots are always within the current original knot interval. Evaluation by forward differencing, on the other hand, is an extrapolation process. Nonetheless, the heavy use of previously computed values, the ordering of the computations, and the fact that the result in the previous paragraph
will not be true for certain variations of the algorithm makes it possible that there are cases in which significant error build-up will occur. As one example, suppose instead of the exact values P_j^0, perturbed values P_j^0 + ε_j are used. The error ε_j could originate, for example, from round-off errors in Step 2 of the algorithm, or if a user works with a curve representation other than B-spline and calculates the P_j^0 from this representation. If all computations in Step 4 of the algorithm are done in exact arithmetic, then for cubic curves a new control vertex W_i = B_G[(u_{i+1}, 1), (u_{i+2}, 1), (u_{i+3}, 1)] has error
Because of the result bounding errors from the original control vertices, and because error build-up has not been a major problem in tests which have been run thus far [19], we do not wish to be pessimistic concerning this topic. At the same time, the remarks in the previous paragraph indicate that caution, further testing, and additional theoretical investigation are needed. There are some modifications to the basic algorithm which will lessen, but not totally eliminate, these shortcomings. Therefore we now turn our attention to variants of the algorithm.

3.6. Variants
In §3.2 we presented our knot insertion algorithm in its most basic form; here we will discuss some possible modifications. Some of these variants may provide ways to alleviate somewhat the drawbacks mentioned above. Others are optimizations that might be useful. All extend the theory surrounding the algorithm. Because of the number of variations, we will be content only to sketch each one, as the details are either cumbersome or topics of current and future research.

3.6.1. Backwards Insertions. In the original version of the algorithm we insert the new knots in ascending order. A symmetric version of the algorithm can be formulated which inserts the knots in descending order. The start-up transformation (Step 2) is identical in both cases. For the descending version Step 3 takes the values off the right side of the triangle in Fig. 3.4, rather than the left. Step 4 in this variant is analogous to Step 4 of the original algorithm. It may actually be advantageous to run the algorithm forward part of the way and backward part of the way. Since this will not introduce a new start-up, no additional computation expense is incurred. In fact, since the last n - 1 new control vertices found using either the ascending or descending version require fewer computations than the remainder of the points, there is a small reduction in the number of computations required. If m > n - 1 the cost will be reduced to the start-up cost plus mn(1 + d) additions/subtractions and mnd
multiplications. Furthermore, running the algorithm forward part way and backward part way may reduce any error build-up. This may be significant because the final few or first few new control vertices may be used in knot insertions in nearby segments.

3.6.2. The Oslo Algorithm. Thus far we have been discussing our algorithm mostly in relation to Boehm's knot insertion algorithm, since of the standard knot insertion techniques it bears the most resemblance to our algorithm. There is also a variant of our algorithm that is similar to the Oslo algorithm. The version of the Oslo algorithm we discuss here is the Oslo II algorithm, the version that works with the control vertices directly. For a variant similar to the Oslo I algorithm, which finds a coefficient matrix, see §3.6.8 on matrix versions. The Oslo II algorithm finds each new control vertex by means of a triangular array of auxiliary control vertices. One new control vertex is found per array. By using our algorithm to insert n contiguous new knots, we duplicate one step of the Oslo algorithm (see Figs. 3.8(a) and 3.8(b)). Doing so only once will not provide a computational advantage over standard knot insertion techniques. However, if many knots are to be inserted in an interval, then the start-up transformation, which only has to be done once per interval, has its cost spread among the many new control vertices. In such cases this version is less costly than the usual form of the Oslo II algorithm but is more expensive than the version of our algorithm given in §3.2. Further modification along the lines presented in [16] will make the number of computations the same. For example, if we insert the first n new knots, we actually obtain n new control vertices (these points are those on the left side of Fig. 3.8(b)). If we could then update the values on the left side of the start-up triangle (Fig. 3.8(a)), we could make repeated use of this observation.
This can be done by inserting, off the left side of the start-up triangle, the next new knot followed, in reverse order, by the second through nth new knots (see Fig. 3.8(c)). The apex of the triangle in this figure is the next new control vertex; the values of the right-hand side of the triangle replace the left side of the original start-up triangle if further insertions are to be done. Further details are given in [2, Chap. 4, this volume]. Note that the Oslo version of our algorithm is a special case of the following observation: to find BG[(u_1, 1), ..., (u_r, 1), (1, 0), ..., (1, 0)] after having done the start-up it suffices to run Step 4 of the algorithm with s_{n+p} = u_p for p = 1, ..., r; i running from 1 to r; and j going from n - i down to n - r.

3.6.3. Reduced Start-Up. Doing the start-up calculations for a curve segment (see Fig. 3.2(a)) is an O(n^2) operation. Although this constitutes a small fraction of the total number of computations if many knots are to be inserted, it would still be advantageous to reduce it. In some cases this is possible. Suppose we do start-up calculations based on the original knots
Knot Insertion and Deletion Algorithms
FIG. 3.8. The Oslo variant of the algorithm. Finding the control vertex BG[α, β, γ] uses the start-up triangle (part (a) of the figure) followed by another triangle (part (b)) for inserting α, β, and γ. A related procedure uses the calculations in parts (b) and (c) and n + 1 = 4 knots α < β < γ < δ to obtain n + 1 new control vertices.
FIG. 3.9. Possible start-up cost reduction. The start-up triangles for [t_q, t_{q+1}) and [t_{q+1}, t_{q+2}) are overlaid. The unsubscripted parenthesized quantities are common to both intervals, those subscripted with an "L" are the blossom values for the curve over [t_q, t_{q+1}), and those subscripted with an "R" are the blossom values for the curve over [t_{q+1}, t_{q+2}). The labels on the edges coincide for the overlaid portion of the diagrams.
and control vertices for the two intervals [t_q, t_{q+1}] and [t_{q+1}, t_{q+2}]. There is significant overlap in these calculations, as shown in Fig. 3.9. In particular, if we have already done the calculation for one of the intervals, we need only use O(n) operations to do it for the other. This observation can be used, for example, to lessen the start-up cost for the Oslo variant just mentioned. It cannot be directly applied to reduce the start-up cost of the usual form of the factored knot insertion algorithm (i.e., the algorithm (and alternative form) stated in §3.2). This is because after knots are inserted in one interval, one uses the updated knot vector and control vertex list when inserting knots into the next interval. It is possible, however, to use a slight modification of the observation to decrease the start-up cost in some cases. This modification, along with the original observation, appears in the algorithm in [26].

3.6.4. Restarting the Algorithm. Situations may arise where one wants to restart the algorithm. For example, one may have inserted a sequence of knots and want to go back and insert still other knots. Since the knots must be inserted in order, there is a need to restart the algorithm. While it is possible to run the algorithm as usual beginning with the newly found W_i, one may want to work with the original V_i. Although the knots must be inserted in order (i.e., if τ_1 < τ_2 and we are inserting the knots in ascending order, we cannot insert τ_2 before τ_1), it is
possible to run the algorithm on any subsequence of the knots to be inserted. Thus to go back and insert some additional knots it is not necessary to redo Steps 1-3. There will be some small extra cost in Step 4 since some of the knots which were previously inserted affect the control vertices to be found and thus need to appear in the algorithm again. Note this modification allows us to begin anywhere in the sequence of knots to be inserted.

3.6.5. Rational B-Spline Curves and Tensor Product B-Spline Surfaces. Our knot insertion technique can be applied to both rational B-spline curves and tensor product B-spline surfaces in the usual way, although any speed advantage in these contexts may be less pronounced (see Chapter 4) and any error build-up may be further magnified. In the case of rational B-spline curves the knot insertion algorithm is applied separately to the numerator and the denominator of the curve. In the case of tensor product surfaces the tensor product is decomposed into its components. See also §3.6.8 below. It is an open question whether there is a useful version of the knot insertion algorithm for rational curves which uses the rational structure of the curve.

3.6.6. Multiple Knots. Some knot insertion techniques, e.g., Boehm's algorithm, can be improved if multiple knots are to be inserted. This is true here as well. When a multiple knot is inserted, certain multiplying coefficients reduce to zero. If it is known beforehand that multiple knots are to be introduced, it may be useful to incorporate this explicitly into the algorithm. It is possible, by inserting a knot n times, to use the knot insertion algorithm to evaluate a B-spline curve. After doing the initial transformation, evaluating each new point requires fewer computations than the de Boor algorithm [8]. However, other techniques such as Horner's method or fast forward differencing are quicker still.

3.6.7. Equally Spaced Knots.
Fast forward differencing is a version of evaluation by differencing and repeated nested multiplication which evaluates a polynomial curve using only additions after the initial differences are found. There exists a version of our knot insertion algorithm which is analogous in that after a (slightly longer than usual) start-up only additions are needed to find the new control vertices. We begin with an arbitrary knot sequence, but insert knots that are equally spaced within the knot interval. We do the initial transformation and part of Step 4, multiply the current auxiliary control vertices by the appropriate factors, and then are able to proceed by additions. Figure 3.10 illustrates this process and provides some details. Again, for this operation to be efficient we need to insert many knots. Further examination of this case and a comparison with other methods merits additional attention, as other knot insertion techniques can also be modified to take advantage of equally spaced knots. The case of equally spaced knots is also considered in [26].
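As a concrete illustration of the evaluation scheme being imitated (a sketch of ours, not from the original text; the coefficient and grid names are our own), the following routine evaluates a cubic at equally spaced parameter values by fast forward differencing: after the start-up phase that builds the difference table, each additional point costs only three additions.

```python
def forward_difference_eval(coeffs, t0, h, m):
    """Evaluate p(t) = c0 + c1*t + c2*t^2 + c3*t^3 at the m equally spaced
    points t0, t0 + h, ..., t0 + (m-1)*h by forward differencing."""
    c0, c1, c2, c3 = coeffs
    p = lambda t: c0 + c1*t + c2*t*t + c3*t**3
    d = [p(t0 + k*h) for k in range(4)]   # start-up: p at four grid points
    for k in range(1, 4):                 # turn the values into differences,
        for j in range(3, k - 1, -1):     # so that d = [p, Dp, D2p, D3p] at t0
            d[j] -= d[j - 1]
    out = []
    for _ in range(m):
        out.append(d[0])
        d[0] += d[1]                      # advance one step: additions only
        d[1] += d[2]
        d[2] += d[3]
    return out
```

For a cubic the third difference is constant, which is why the loop needs no multiplications; as the text notes ([18]), this speed comes at the price of error accumulation over long runs of points.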
FIG. 3.10. Applying the algorithm to equally spaced knots. When the knots to be inserted are equally spaced, say s_{i+1} - s_i = h for i = n, ..., n + m - 1, the algorithm can be modified so that after an initial start-up phase only additions are used.
Note that the remarks in §3.5 concerning error build-up are still valid here; that is, although the error build-up of factored knot insertion merits further investigation, it ought not share the wild error propagation of fast forward differencing [18]. Adaptive forward differencing is a means of evaluating polynomial curves by fast forward differencing while allowing for a step size which either increases or decreases [20], [21], [29]. While it is possible to use the techniques of adaptive forward differencing to some extent to do adaptive knot insertion, there is a fundamental difference between the two processes, since decreasing or increasing the step size in forward differencing affects only the next point of the curve, while varying the distance between successive knots affects many new control vertices. Thus altering our algorithm to adaptively vary the interval size, while possible, is less straightforward than its counterpart in forward differencing.

3.6.8. Matrix Versions. When working in high dimensions or with tensor product surfaces, it may be preferable to use algorithms that, instead of dealing directly with control vertices, produce the matrix or matrices that transform the old control vertices into the new control vertices. There is a version of the Oslo algorithm to do this (the Oslo I algorithm [13]); there is also a variation of Boehm's knot insertion technique to do so [6]. Analogous techniques exist for our algorithm, and are discussed in more detail in [2, Chap. 4, this volume]. Although our technique may provide a savings in finding the matrices, the matrix multiplication cost is usually the same regardless of the algorithm. Since in many cases this comprises the bulk of the computations, decreasing the cost of finding the matrix or matrices, while desirable, will often be of only minor importance.

3.6.9. Special Hardware and Parallel Implementation.
Boehm's knot insertion algorithm, the Oslo algorithm, and the factored knot insertion algorithm are all amenable to a parallel pipelined implementation. Such an implementation will speed up all of these algorithms. There is also specially designed hardware for some variants of forward differencing [20]. Because of the similarity between our algorithm and evaluation by differencing, such hardware (or modifications of it) may be easily adaptable to our algorithm.

3.7. Concluding Remarks
We have presented an algorithm for inserting knots into B-spline curves and surfaces. This algorithm is closely related to an algorithm in [26]; however, the derivations, analysis, and further development in this chapter and [26] differ significantly. Although our algorithm does have some drawbacks when compared to known knot insertion techniques, it requires fewer computations in some cases, and therefore may provide some practical benefits. The algorithm actually involves transforming the curve to another B-spline-like representation
in which we can do knot insertion in fewer operations than we can in the general case. The new representation is closely related to the Newton form of a polynomial. The algorithm also possesses theoretical importance. Not only does it unify some algorithms involving B-spline curves, but it also connects aspects of these procedures to algorithms involving the Newton form of a polynomial. Furthermore, the B-spline-like basis it uses may be helpful in developing other algorithms. While in this paper we have discussed many aspects of our knot insertion algorithm, there are many details left to be filled in and many topics which may be fruitful for future research. We conclude by listing the most prominent:

- a detailed error analysis of the algorithm and its variants
- an investigation of integer implementation of the algorithm
- further examination of the variants, in particular the one dual to fast forward differencing
- further exploration of the basis used in the algorithm.

Acknowledgments
This work was supported in part by Natural Sciences and Engineering Research Council of Canada grant OGP0036825 and National Science Foundation grant CCR-9113239. The authors also wish to thank Rui Feng Zhu of the University of Minnesota for doing further testing of the algorithm.

References

[1] P. J. Barry and R. N. Goldman, What is the natural generalization of a Bezier curve?, in Mathematical Methods in Computer Aided Geometric Design, T. Lyche and L. L. Schumaker, eds., Academic Press, San Diego, CA, 1989, pp. 71-85.
[2] ———, Knot insertion algorithms, this volume, pp. 89-133.
[3] P. J. Barry, A. D. DeRose, and R. N. Goldman, B-splines, Polya curves, and duality, J. Approx. Theory, 65 (1991), pp. 3-21.
[4] R. H. Bartels, J. C. Beatty, and B. A. Barsky, An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, Morgan Kaufmann Publishers, Palo Alto, CA, 1987.
[5] W. Boehm, Inserting new knots into B-spline curves, Comput. Aided Design, 12 (1980), pp. 199-201.
[6] ———, On the efficiency of knot insertion algorithms, Comput. Aided Geom. Design, 2 (1985), pp. 141-143.
[7] W. Boehm, G. Farin, and J. Kahmann, A survey of curve and surface methods in CAGD, Comput. Aided Geom. Design, 1 (1984), pp. 1-60.
[8] C. de Boor, On calculating with B-splines, J. Approx. Theory, 6 (1972), pp. 50-62.
[9] ———, A Practical Guide to Splines, Springer-Verlag, New York, 1978.
[10] C. de Boor and G. Fix, Spline approximation by quasi-interpolants, J. Approx. Theory, 8 (1976), pp. 19-45.
[11] C. de Boor and K. Höllig, B-splines without divided differences, in Geometric Modeling: Algorithms and New Trends, G. Farin, ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987, pp. 21-27.
[12] P. de Casteljau, Shape Mathematics and CAD, Kogan Page Ltd., London, 1986.
[13] E. Cohen, T. Lyche, and R. Riesenfeld, Discrete B-splines and subdivision techniques in computer aided geometric design and computer graphics, Comput. Graphics and Image Processing, 14 (1980), pp. 87-111.
[14] A. D. DeRose, Private communication, 1989.
[15] J. D. Foley and A. Van Dam, Fundamentals of Interactive Computer Graphics, Addison-Wesley, Reading, MA, 1982.
[16] R. N. Goldman, Blossoming and knot insertion algorithms for B-spline curves, Comput. Aided Geom. Design, 7 (1990), pp. 69-81.
[17] R. N. Goldman and P. J. Barry, Algorithms for progressive curves: Extending B-spline and blossoming techniques to the monomial, power, and Newton dual bases, this volume, pp. 11-64.
[18] K. I. Kanatani, Errors of the incremental method for curves, Comput. Vision, Graphics, and Image Processing, 26 (1984), pp. 130-133.
[19] A. Lee and T. T. Png, Speed and robustness of factored knot insertion scheme for B-spline curves, Tech. Report CS-90-07, Computer Science Department, University of Waterloo, Waterloo, Ontario, Canada, 1990.
[20] S.-L. Lien, M. Shantz, and V. Pratt, Adaptive forward differencing for rendering curves and surfaces, Computer Graphics—Proceedings of SIGGRAPH '87, 21 (1987), pp. 111-118.
[21] S.-L. Lien, M. Shantz, and R. Rocchetti, Rendering cubic curves with integer adaptive forward differencing, Computer Graphics—Proceedings of SIGGRAPH '89, 23 (1989), pp. 157-166.
[22] T. Lyche, Private communication, 1990.
[23] T. Lyche and K. Mørken, Making the Oslo algorithm more efficient, SIAM J. Numer. Anal., 23 (1986), pp. 663-675.
[24] L. Ramshaw, Blossoming: A connect-the-dots approach to splines, Tech.
Report 19, Digital Equipment Research Center, Palo Alto, CA, 1987.
[25] ———, Blossoms are polar forms, Comput. Aided Geom. Design, 6 (1989), pp. 323-358.
[26] P. V. Sankar, M. J. Silbermann, and L. A. Ferrari, Curve and surface generation based on a high speed derivative algorithm, submitted for publication.
[27] H.-P. Seidel, Knot insertion from a blossoming point of view, Comput. Aided Geom. Design, 5 (1988), pp. 81-86.
[28] ———, A new multiaffine approach to B-splines, Comput. Aided Geom. Design, 6 (1989), pp. 23-32.
[29] M. Shantz and S.-L. Lien, Rendering trimmed NURBS with adaptive forward differencing, Computer Graphics—Proceedings of SIGGRAPH '88, 22 (1988), pp. 189-198.
CHAPTER 4

Knot Insertion Algorithms

Phillip J. Barry and Ronald N. Goldman
4.1. Introduction
Knot insertion is one of the most useful and powerful procedures for analyzing B-spline curves and surfaces. As a constructive means for subdividing curves and surfaces, it is an important practical technique. (Subdivision is useful not only in its own right but also in rendering and intersection algorithms [15].) Knot insertion is also a vital theoretical tool, sufficiently encompassing to provide elegant proofs of such disparate results as Boehm's derivative algorithm [14, Chap. 2, this volume] and the variation diminishing property of B-spline curves [16]. It can even be used as the primary approach in the development of B-spline curve theory in geometric modeling [12]. Furthermore, knot insertion is a simple tool, both computationally inexpensive and conceptually straightforward. In 1980 two basic knot insertion techniques were published: Boehm's knot insertion algorithm and the Oslo algorithm. Boehm's algorithm [5], [7] is a technique for inserting one new knot at a time into a B-spline curve. The Oslo algorithm [11] is a procedure for simultaneously inserting a sequence of knots. Both procedures have since been widely implemented; for example, the Alpha_1 modeling system developed at the University of Utah uses the Oslo algorithm. It may therefore be surprising that certain aspects of knot insertion are still not well understood. New algorithms [3, Chap. 3, this volume], [13], [25], [31] indicate that the theory of knot insertion is more involved than previously believed. Moreover, not all knot insertion issues are simple; for instance, the question "which of the two main algorithms is more efficient?" has elicited different responses [6], [23]. The thesis of this chapter is that the topic of knot insertion is richer than is generally appreciated. We will show that there are a wealth of algorithms for inserting knots, and a wealth of issues surrounding these algorithms' use. Yet, at the same time, underlying these algorithms is an elegant theoretical framework.
In this chapter, therefore, we will discuss knot insertion algorithms in
general and their efficiency in particular. Part I is a unified presentation not only of Boehm's algorithm and the Oslo algorithm, but also of a third knot insertion technique related to Sablonniere's algorithm [14, Chap. 2, this volume], [30]. Part I additionally discusses many variants of these algorithms. Part II derives computation counts for certain of these variants. This part is primarily a guide for further exploration of the algorithms rather than a prescription for identifying which one is most efficient. The two parts, although having different contents, are intimately related. The exposition in Part I provides a foundation for the computation counts of Part II while the process of identifying the factors affecting those counts in Part II augments the discussion in Part I.
Part I: Knot Insertion Algorithms

4.2. General Remarks
There are two main knot insertion techniques: Boehm's algorithm and the Oslo algorithm. However, even the original Oslo algorithm paper identified two versions of the Oslo algorithm. A later paper [24] presented the "improved Oslo algorithm," which made both original Oslo versions more efficient. Boehm noted [6] that there is more than one version of his algorithm. Recent works [3, Chap. 3, this volume], [13], [25], [31] have presented still other ways of doing knot insertion. Even within a single version there are various ways to arrange the computations. This profusion of versions and sub-versions is one reason why the subject of knot insertion is complicated. A second reason is that there are many different approaches to these algorithms. The original derivations in both [5] and [11] used divided differences; however, the fundamental nature of these results, the unfamiliarity of some users of B-spline curves with divided differences, and the length of the original Oslo algorithm proof have led to an abundance of alternative proofs [2], [9], [13], [18], [19], [21], [26], [27], [32]. Yet these works usually examine only one algorithm or approach at a time and thus provide only minimal insight into the relationships between the various techniques. The exposition of the knot insertion algorithms given below consists of what we consider a reasonable grouping of many versions along with a unified approach to them. This approach relies on a tool called "blossoming" which we shall review in §4.3. We then present the two basic knot insertion algorithms—Boehm's algorithm and the Oslo algorithm—along with a version of Sablonniere's algorithm (a less practical but nonetheless theoretically interesting knot insertion technique) in §§4.4-4.7. Sections 4.8-4.11 are devoted to variations of these algorithms; §4.12 contains a summary of Part I.

4.3. Blossoming
To present the algorithms, we use a tool called "blossoming" (also known as "polar forms"), which was pioneered by Ramshaw and others [10], [28],
[29], [33]. Blossoming is equivalent to certain other control vertex oriented B-spline approaches (see [1], [20]), but has the advantage of labeling the control vertices propitiously. This labeling not only permits a simple exposition of many algorithms, but also builds intuition about why each algorithm works. The core idea in blossoming is to associate with any degree n polynomial G(t) a unique n-variate polynomial BG(u_1, ..., u_n) defined by the following properties:

multiaffine: for all indices i and all real numbers c

BG(u_1, ..., (1 - c) u_i + c v_i, ..., u_n) = (1 - c) BG(u_1, ..., u_i, ..., u_n) + c BG(u_1, ..., v_i, ..., u_n)
symmetry: for any permutation π of (1, 2, ..., n)

BG(u_{π(1)}, ..., u_{π(n)}) = BG(u_1, ..., u_n)
diagonal: BG reduces to G when evaluated on its diagonal; i.e.,

BG(t, ..., t) = G(t)
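As a concrete illustration of these three properties (ours, not from the chapter), the blossom of a cubic polynomial can be written down explicitly: the monomial t^k blossoms to the k-th elementary symmetric function of the arguments divided by the binomial coefficient C(3, k).

```python
def cubic_blossom(a, u1, u2, u3):
    """Multiaffine blossom of G(t) = a[0] + a[1]*t + a[2]*t^2 + a[3]*t^3.
    t^k blossoms to e_k(u1, u2, u3) / C(3, k), where e_k is the k-th
    elementary symmetric function.  Symmetry and multiaffinity are then
    immediate, and setting u1 = u2 = u3 = t recovers G(t)."""
    e1 = u1 + u2 + u3
    e2 = u1*u2 + u1*u3 + u2*u3
    e3 = u1*u2*u3
    return a[0] + a[1]*e1/3 + a[2]*e2/3 + a[3]*e3

def G(a, t):
    """The cubic polynomial itself, for checking the diagonal property."""
    return a[0] + a[1]*t + a[2]*t*t + a[3]*t**3
```

Each e_k is affine in every single variable, which is why the blossom as a whole is multiaffine.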
We refer to BG as the "multiaffine blossom" or just the "blossom" of G. Related to the multiaffine blossom is the "multilinear blossom," which we also denote by BG since its arguments differ from those of the multiaffine blossom. The multilinear blossom depends on n pairs (u_i, w_i), and has the following properties:

multilinear: for all indices i and all real numbers c_1, c_2

BG((u_1, w_1), ..., c_1 (u_i, w_i) + c_2 (v_i, x_i), ..., (u_n, w_n)) = c_1 BG((u_1, w_1), ..., (u_i, w_i), ..., (u_n, w_n)) + c_2 BG((u_1, w_1), ..., (v_i, x_i), ..., (u_n, w_n))
symmetry: for any permutation π of (1, 2, ..., n)

BG((u_{π(1)}, w_{π(1)}), ..., (u_{π(n)}, w_{π(n)})) = BG((u_1, w_1), ..., (u_n, w_n))
diagonal:

BG((t, 1), ..., (t, 1)) = G(t)
Often these properties make it easier to work with the blossom of the curve rather than the curve itself. This is true particularly because the control vertices of a curve are simply expressed in terms of the blossom. Let G be a B-spline curve segment dependent on control vertices {V_i} and knots {u_i}. Then another property is

dual functional:

V_i = BG(u_{i+1}, ..., u_{i+n})
This result has immediate benefits. It enables us to label the original control vertices of a B-spline curve in terms of its blossom. It also enables us to label, in terms of the blossom, any new control vertices which would occur upon the insertion of new knots. Combined with the multiaffine (or multilinear) and symmetry properties, it relates the new control vertices to the old control vertices by a sequence of successive linear interpolations (or combinations, in the multilinear case). In particular, in the affine case all blossom values sharing n - 1 arguments lie on a line, with the values of the remaining argument determining the relative positions of the points. There is still one item to clarify. The blossom is defined for polynomials, yet B-spline curves are piecewise polynomials. The blossoms for B-spline curves are thus also piecewise, and care must be taken to use the proper blossom segment. While there is a different blossom segment associated with each knot interval, the continuity conditions on the curve induce certain constraints on the blossom. If F and G are two curves meeting with C^r continuity at t, then

BF(t, ..., t, u_1, ..., u_r) = BG(t, ..., t, u_1, ..., u_r), with t appearing n - r times,
for all u_1, ..., u_r (an analogous result holds for the multilinear blossom). One effect of this property is that the arguments at which the blossom must be evaluated to yield a given control vertex do not depend on the knot interval; i.e., all blossom segments affected by a certain control vertex yield that control vertex when evaluated at these arguments. This observation implies, in general, that we need not explicitly denote which blossom segment we are using. Care must be taken that the segment used is affected by the vertex under consideration; however, we can use any such segment. For this reason our blossom notation will not specify which segment is used.

4.4. General Observations on the Basic Algorithms
Knot insertion is a transformation technique—given a curve in one representation, it expresses the same curve in another representation. On a lower level there are three possible perspectives. One is knot oriented. The entire process consists of inserting a sequence of knots. The primitive step in this perspective is therefore inserting a single knot. A second perspective is control vertex oriented. The output of the algorithm is, after all, the new control vertices, so the primitive step is finding a single new control vertex. A third viewpoint is
segment oriented. One can view knot insertion as transformations of polynomials. In this view the primitive step is finding the representation (i.e., the set of control vertices) defining a single new segment. Another way of viewing the differences between the various algorithms is by examining the arrays of points the primitive step in each algorithm computes. As we shall see, using one step of Boehm's algorithm to insert a single knot into a degree n B-spline curve computes a linear array of n new control vertices from n + 1 old control vertices. One step of the Oslo algorithm computes a triangular array to find one new control vertex from n + 1 old control vertices. One step of Sablonniere's algorithm computes a tetrahedral array of points to find n + 1 new control vertices from n + 1 old control vertices.

4.5. Boehm's Algorithm

The viewpoint of Boehm's algorithm is knot oriented; the primitive step is inserting a single knot. We follow [32] in deriving Boehm's formula from the multiaffine blossom. Suppose we wish to insert a knot t with t_q < t < t_{q+1}. The new control vertices are, by the dual functional property of the blossom,

W_j = BG(t_{j+1}, ..., t_q, t, t_{q+1}, ..., t_{j+n-1})
for j = q - n + 1, ..., q. Thus if the new control vertices are denoted by W_i and the old by V_i, Boehm's formula is

W_j = ((t_{j+n} - t) / (t_{j+n} - t_j)) V_{j-1} + ((t - t_j) / (t_{j+n} - t_j)) V_j
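Boehm's formula translates directly into a short routine. The following sketch is ours, not the chapter's code; it uses the indexing convention V_j = BG(t_{j+1}, ..., t_{j+n}) and scalar control vertices for simplicity.

```python
def insert_knot(n, knots, ctrl, tbar):
    """Boehm's algorithm: insert the single knot tbar into a degree-n
    B-spline with knot vector `knots` and control vertices `ctrl`,
    where ctrl[j] = BG(knots[j+1], ..., knots[j+n]) (one common convention).
    Returns the refined knot vector and the new control vertices."""
    # locate q with knots[q] <= tbar < knots[q+1]
    q = max(j for j in range(len(knots) - 1) if knots[j] <= tbar < knots[j + 1])
    new_knots = knots[:q + 1] + [tbar] + knots[q + 1:]
    new_ctrl = []
    for j in range(len(ctrl) + 1):
        if j <= q - n:                       # unaffected vertices on the left
            new_ctrl.append(ctrl[j])
        elif j > q:                          # unaffected vertices on the right
            new_ctrl.append(ctrl[j - 1])
        else:                                # one affine combination per vertex
            a = (tbar - knots[j]) / (knots[j + n] - knots[j])
            new_ctrl.append((1 - a) * ctrl[j - 1] + a * ctrl[j])
    return new_knots, new_ctrl
```

Only the n vertices with q - n + 1 <= j <= q require arithmetic, which is the linear array of points mentioned in §4.4.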
Figure 4.1 diagrams this process (all figures in this chapter illustrate the cubic case, unless otherwise indicated). If many knots are inserted, they are inserted one at a time, using updated knot and control vertex sequences.

4.6. Oslo Algorithm

The Oslo algorithm is oriented toward the new control vertices. Of the many ways to implement this algorithm, we will present what we consider the most basic, postponing discussion of some other possibilities until later. Denote the original knots by t_i and the refined knots by u_i. We need to find the new control vertices

W_i = BG(u_{i+1}, ..., u_{i+n})
FIG. 4.1. Boehm's knot insertion algorithm. The single knot t_q < t < t_{q+1} is inserted.
from the old control vertices

V_j = BG(t_{j+1}, ..., t_{j+n}).
Suppose, for a cubic curve, we wish to find the new control vertex W_i. Let [t_q, t_{q+1}) be any original knot interval containing a new knot interval whose curve segment is affected by W_i. How do we find W_i = BG(u_{i+1}, u_{i+2}, u_{i+3}) from the original control vertices BG(t_{q-2}, t_{q-1}, t_q), BG(t_{q-1}, t_q, t_{q+1}), BG(t_q, t_{q+1}, t_{q+2}), and BG(t_{q+1}, t_{q+2}, t_{q+3}) defining the curve over [t_q, t_{q+1})? The symmetry and multiaffinity of the blossom provide one way of doing this: the first two old control vertices can be combined as in Boehm's algorithm to yield BG(t_{q-1}, t_q, u_{i+1}), the second two BG(t_q, t_{q+1}, u_{i+1}), and the final pair BG(t_{q+1}, t_{q+2}, u_{i+1}). From these three points taken two at a time we derive BG(t_q, u_{i+1}, u_{i+2}) and BG(t_{q+1}, u_{i+1}, u_{i+2}), and from these we can derive the desired point BG(u_{i+1}, u_{i+2}, u_{i+3}) as shown in Fig. 4.2. This process is the Oslo algorithm; that is, the Oslo algorithm consists of a set of triangular computations, one for each new control vertex. Each triangle begins with certain old control vertices and ends with the desired new control vertex. The triangular array for arbitrary degree n consists of n(n + 1)/2 affine combinations arranged into n levels. In each level one of the new knots is introduced as an argument in the blossom. The blossom values found in each level are affine combinations of the blossom values in the previous level; the exact combination is easily determined using the symmetry and multiaffinity of the blossom. These coefficients are, in fact, the same as those for the de Boor algorithm [8] except that the domain variable t in that algorithm is replaced by one of the blossom arguments in each level of the triangle. In finding BG(u_{i+1}, u_{i+2}, u_{i+3}) it is not necessary to use the new knots in increasing order since by the symmetry of the blossom the actual order is inconsequential. This important observation will be used frequently below. Since triangular arrays such as Fig. 4.2 occur repeatedly, we will call such arrays "Oslo triangles."
A "u_{i+1} ... u_{i+n} Oslo triangle" is an Oslo triangle where the first knot in the list appears in the lowest level of the triangle, the second knot appears in the second lowest level, etc. The triangle of Fig. 4.2 is a u_{i+1} u_{i+2} u_{i+3} Oslo triangle.
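An Oslo triangle is just the de Boor recurrence with the domain variable replaced, level by level, by the next knot in the list. The following sketch is ours (using the indexing convention ctrl[j] = BG(t_{j+1}, ..., t_{j+n}), with q chosen so the desired vertex affects the segment over [t_q, t_{q+1})):

```python
def oslo_triangle(n, knots, ctrl, q, args):
    """Compute BG(args[0], ..., args[n-1]) for the curve segment over
    [knots[q], knots[q+1]) by a triangle of affine combinations.
    Level r introduces args[r-1] as a blossom argument; the weights are
    the de Boor weights with the domain variable replaced by args[r-1]."""
    c = {j: ctrl[j] for j in range(q - n, q + 1)}
    for r in range(1, n + 1):
        x = args[r - 1]
        for j in range(q, q - n + r - 1, -1):
            a = (x - knots[j]) / (knots[j + n - r + 1] - knots[j])
            c[j] = (1 - a) * c[j - 1] + a * c[j]
    return c[q]
```

The triangle uses n(n + 1)/2 affine combinations, and since the result is a blossom value, permuting `args` leaves it unchanged; taking all arguments equal to t reduces the triangle to the de Boor evaluation of the curve at t.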
FIG. 4.2. The Oslo algorithm. In this and subsequent figures the point BG(α, β, γ) is written as "αβγ."
When one or more of the knots appearing as arguments in the blossom representation for a new control vertex is an element of the original knot vector, there may be a choice as to which original segment of the curve to use in finding the new vertex. For example, to find a new point BG(t_q, u, t_{q+1}), where t_q < u < t_{q+1}, we can use the segment over the interval [t_{q-1}, t_q), [t_q, t_{q+1}), or [t_{q+1}, t_{q+2}), since, by the remark at the end of §4.3, the value of BG(t_q, u, t_{q+1}) coincides for each of these segments. Usually the Oslo algorithm takes the leftmost possible interval.

4.7. Sablonniere's Algorithm
Sablonniere's algorithm adopts the third knot insertion perspective—that the basic step in knot insertion is to transform a polynomial segment from one representation to another. Thus the input is the n + 1 original control vertices defining the relevant curve segment, the output the n + 1 vertices for the new representation. Sablonniere's original algorithm [30] transformed a B-spline curve segment into Bezier form. This algorithm is easily modified to give B-spline to B-spline transformations (see Chapter 2). Once again we are concerned with finding new control vertices from old, although now we wish to find n + 1 new vertices from n + 1 old ones. One way to accomplish this is by calculating n + 1 Oslo triangles. The crucial observation in Sablonniere's algorithm is that it is possible to use the symmetry
of the blossom to produce overlaps in these triangles, so that the algorithm can be represented as an interconnected tetrahedron rather than a set of n + 1 independent triangles. We illustrate this overlap with an example. Again we work with cubics. Suppose we wish to find new control vertices BG(u_{i-2}, u_{i-1}, u_i), BG(u_{i-1}, u_i, u_{i+1}), BG(u_i, u_{i+1}, u_{i+2}), and BG(u_{i+1}, u_{i+2}, u_{i+3}), which define the curve over a new knot interval [u_i, u_{i+1}), from the original control vertices BG(t_{q-2}, t_{q-1}, t_q), BG(t_{q-1}, t_q, t_{q+1}), BG(t_q, t_{q+1}, t_{q+2}), and BG(t_{q+1}, t_{q+2}, t_{q+3}), which define the curve over an original knot interval [t_q, t_{q+1}) containing [u_i, u_{i+1}). Find BG(u_{i-2}, u_{i-1}, u_i) by running a u_{i-2} u_{i-1} u_i Oslo triangle. To find the remaining points, first find BG(t_{q-1}, t_q, u_{i+1}), BG(t_q, t_{q+1}, u_{i+1}), and BG(t_{q+1}, t_{q+2}, u_{i+1}) from the original control vertices. The calculations now split. On one hand, we use these three new points to get BG(t_q, u_i, u_{i+1}) and BG(t_{q+1}, u_i, u_{i+1}); on the other, we use them to obtain BG(t_q, u_{i+1}, u_{i+2}) and BG(t_{q+1}, u_{i+1}, u_{i+2}). The first of these two sets of points yields BG(u_{i-1}, u_i, u_{i+1}), the second yields both BG(u_i, u_{i+1}, u_{i+2}) and BG(u_{i+1}, u_{i+2}, u_{i+3}). Thus we have run four Oslo triangles: a u_{i-2} u_{i-1} u_i triangle, a u_{i+1} u_i u_{i-1} triangle, a u_{i+1} u_{i+2} u_i triangle, and a u_{i+1} u_{i+2} u_{i+3} triangle; because of the overlap in the computations, these can be arranged in a tetrahedron as in Fig. 4.3. Sablonniere's algorithm for general degree can also be diagrammed as a tetrahedron. The original control vertices are placed along one edge of the base of the tetrahedron, and the new control vertices appear along the skew edge. The two faces that join at the edge containing the old vertices are the Oslo triangles used to find the first and last new control vertices for the new segment. We call the face from which the computations for the new interior control vertices veer off the "veer face"; the other face we call the "lone face."
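In the cubic example above, the four new vertices can also be produced by four separate Oslo triangles; the point of Sablonniere's algorithm is that their intermediate levels overlap and can be shared. The following naive sketch (ours; same indexing convention as before, with the new segment assumed to lie inside the original interval) computes the same four points without exploiting the sharing:

```python
def blossom3(knots, ctrl, q, x1, x2, x3):
    """Blossom of the cubic segment over [knots[q], knots[q+1]), with
    ctrl[j] = BG(knots[j+1], knots[j+2], knots[j+3])."""
    c = {j: ctrl[j] for j in range(q - 3, q + 1)}
    for r, x in enumerate((x1, x2, x3), start=1):
        for j in range(q, q - 4 + r, -1):
            a = (x - knots[j]) / (knots[j + 4 - r] - knots[j])
            c[j] = (1 - a) * c[j - 1] + a * c[j]
    return c[q]

def new_segment_vertices(knots, ctrl, q, u, i):
    """The four control vertices BG(u_{i-2}, u_{i-1}, u_i), ...,
    BG(u_{i+1}, u_{i+2}, u_{i+3}) of the new cubic segment [u[i], u[i+1]),
    assumed to lie inside [knots[q], knots[q+1]).  Run as four independent
    triangles here; Sablonniere's tetrahedron obtains the same points with
    fewer affine combinations by sharing overlapping levels."""
    return [blossom3(knots, ctrl, q, u[i + a], u[i + b], u[i + c])
            for (a, b, c) in ((-2, -1, 0), (-1, 0, 1), (0, 1, 2), (1, 2, 3))]
```

The shared levels visible in Fig. 4.3 are exactly the values this sketch recomputes in more than one of its four triangles.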
Note that the order in which the new knots appear in the lone face is inconsequential, but in the veer face they must appear in the proper order. Because one control vertex affects n + 1 different spline segments, it is unnecessary to run Sablonniere's algorithm for every new segment; rather, running it for every (n + 1)st new segment will suffice. If fewer than n + 1 control vertices for a segment are to be found, only the part of the tetrahedron that contributes to these vertices need be computed.

4.8. General Remarks on Variations
To make sense of the many versions we present next, we group them into three categories. First are those that reorganize the computations to eliminate needless or redundant affine combinations. Second are those that either write each affine combination in a different form, or write a sequence of affine combinations in another manner. Third are matrix techniques; these techniques explicitly find the matrix that relates the new and old control vertices.
Knot Insertion Algorithms
FIG. 4.3. Sablonnière's algorithm. To conserve space, the q's and i's have been left out of the knots' subscripts.
Knot Insertion and Deletion Algorithms
FIG. 4.4. Inefficiencies in the original Oslo algorithm. A t t_{q+1} t_{q+2} Oslo triangle is run to find B_G(t, t_{q+1}, t_{q+2}).
4.9. Variations I: The Improved Oslo Algorithm, Goldman's Versions, and Recursive Sablonnière
In this section we discuss the improved Oslo algorithm, Goldman's version of the Oslo algorithm, a modification of Goldman's version, and a recursive version of Sablonnière's algorithm. These algorithms reduce the number of affine combinations performed. A well-known variant of the Oslo algorithm is the "improved Oslo algorithm" [24]. It relies on the observation that it is possible to reduce the size of the Oslo triangles in certain cases. In the original Oslo algorithm each new control vertex requires n(n + 1)/2 affine combinations. But suppose only one new knot is inserted. Boehm's algorithm performs only one affine combination per new control vertex. The Oslo algorithm, in this case, is doing additional computations. Figure 4.4 shows where these inefficiencies lie—some quantities are computed and then multiplied by zero, and other quantities are divided or multiplied by a factor only later to have that factor cancelled. Suppose, more generally, we wish to find a new vertex B_G(t_{j+1}, ..., t_{j+r}, u_{i+1}, ..., u_{i+n-r}). Since r of the arguments consist of original knots and n − r of new knots, we need to compute only an (n − r)-level Oslo triangle. Doing so is the improved Oslo algorithm. This procedure yields a much more efficient algorithm if the knots inserted are few or are mostly separated by old knots;
if the knots are otherwise placed, most of the triangles still have n levels and only a minor speed-up occurs. Yet another advantage of the improved Oslo algorithm, and the other improved versions mentioned below, is that they reorganize the computations so as to eliminate nonconvex combinations. Nonconvex combinations can occur, as happens in Fig. 4.4, if the original interval [t_q, t_{q+1}) does not contain all the new knots used in that step. Further improvements are possible. Examine Fig. 4.2 once more while supposing that u_{i+1}, u_{i+2}, and u_{i+3} are the first three new knots inserted. Then we need to find not only B_G(u_{i+1}, u_{i+2}, u_{i+3}), but also B_G(t_{q-1}, t_q, u_{i+1}) and B_G(t_q, u_{i+1}, u_{i+2}). However, these are precisely the points that appear on the left edge of Fig. 4.2. What has happened is that the number of computations for these points is reduced (the improved Oslo algorithm observation), and overlap occurs (as in the Sablonnière algorithm, although more completely). A similar event occurs for arbitrary degree. Thus, not only do we need fewer operations to find certain new control vertices, in some cases we need none at all, as the new control vertices are by-products of the computations done to find other new control vertices. This observation, made by Goldman in [13], leads to two other versions. The first version uses this observation along with updating. Above we assumed that u_{i+1}, u_{i+2}, u_{i+3} were the first knots inserted. If we find all the new control vertices resulting from the insertion of a set of knots, and then update the knot and control vertex sequences before inserting other knots, we can remove this constraint. There are a variety of ways to proceed in this manner; Goldman advocates the following scheme: rather than inserting n knots at a time, as our example may suggest, insert n + 1. This allows us to find the remaining new vertices by running another Oslo triangle. For instance, our example would need points B
FIG. 4.5. Goldman's version of the Oslo algorithm. The triangle in Fig. 4.2 is run along with the Oslo triangle shown here. The result is the insertion of n + 1 new knots using two Oslo triangles.
FIG. 4.6. Inserting only two knots using Goldman's version. The new control vertices are found from the left and top sides of the truncated triangle of part (a) and the right side of part (b).
algorithm. The second version of Goldman's algorithm, which we will refer to as the "modified Goldman" technique, also shares some characteristics with the Oslo algorithm and Sablonnière's algorithm. Goldman's algorithm and Boehm's algorithm use the output of one step as the input to the next. There may be many intermediate steps between a new (final) control vertex and an old (original) vertex used in finding it. In some cases it is advantageous to find the new control vertices as directly as possible from the old ones. In the Oslo algorithm and Sablonnière's algorithm, each new control vertex is at most n levels of affine combinations away from the old control vertices (which is the best one can do in general). In Boehm's or Goldman's algorithm the distance will depend on the number of knots inserted, as well as the degree. The output of one step of the modified Goldman algorithm is, as in Sablonnière's algorithm, n + 1 new (final) control vertices. Again two triangles are used, and again grouping occurs, although this time on vertices. Let B_G(u_{i-n+1}, ..., u_i), ..., B_G(u_{i+1}, ..., u_{i+n}) be the n + 1 vertices defining the curve over a new knot interval [u_i, u_{i+1}), and let [t_q, t_{q+1}) be the original knot interval such that [u_i, u_{i+1}) ⊂ [t_q, t_{q+1}). Use the control vertices defining the segment over [t_q, t_{q+1}) to run a u_i ··· u_{i-n+1} Oslo triangle. Then run a u_{i+1} ··· u_{i+n} triangle off the right side of the first triangle, as shown in Fig. 4.7. The left edge of the second triangle contains the desired control vertices. This version restricts the number of levels between the new control vertices and the old ones to at most 2n. Once again the basic step takes advantage of overlap and consists of a pair of Oslo triangles. An improved version of this algorithm removes any nonconvex combinations and uses the same number of combinations as Boehm's and Goldman's algorithms [4]. Some of the observations above can be applied to Sablonnière's algorithm.
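Since both Goldman's version and the modified version are measured against Boehm's single-knot step, it may help to see that step concretely. The sketch below is a standard textbook formulation of Boehm's algorithm in Python; the names, the scalar control "points," and the simple interval search are my own simplifications, not this chapter's notation:

```python
def boehm_insert(t, P, n, x):
    """Insert the single knot x into the degree-n B-spline (t, P).
    Assumes t[q] <= x < t[q+1] for some q, and that no denominator
    below vanishes (i.e., x is not inserted at a full-multiplicity knot)."""
    q = max(j for j in range(len(t) - 1) if t[j] <= x < t[j + 1])
    Q = []
    for j in range(len(P) + 1):
        if j <= q - n:
            Q.append(P[j])                      # unaffected leading vertices
        elif j <= q:
            a = (x - t[j]) / (t[j + n] - t[j])  # one affine combination
            Q.append((1 - a) * P[j - 1] + a * P[j])
        else:
            Q.append(P[j - 1])                  # unaffected trailing vertices
    return t[:q + 1] + [x] + t[q + 1:], Q
```

Only the n middle vertices require an affine combination per inserted knot, which is the count used for Boehm's algorithm in Part II below.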
If some of the knots of the new segment appear in the knot sequence for the old segment, then fewer computations need be done. This observation can lead to an "improved Sablonnière algorithm," somewhat analogous to the improved Oslo algorithm. Also, a Goldman version can be obtained by running two tetrahedra on a group of 3n + 1 consecutive new knots all strictly within the old knot interval. Both tetrahedra use the same original control vertices. The first tetrahedron's lone face is an Oslo triangle taking the first n knots in increasing order. The veer face is an Oslo triangle with the (n + 1)st through 2nth knots in the group inserted in increasing order. The second tetrahedron has a veer face with the (n + 2)nd through (2n + 1)st knots in decreasing order, and a lone face with the (2n + 2)nd through (3n + 1)st knots in decreasing order. Before the next step of the algorithm the old knot vector is refined by the addition of the 3n + 1 knots, and the control vertex list is updated by replacing the control vertices used to begin the tetrahedral computations with the points (1) along the "left edge" of the lone face of the first tetrahedron, (2) along the skew edge of the first tetrahedron, (3) along the skew edge of the second tetrahedron, and (4) along the "right edge" of the lone face of the second tetrahedron.
FIG. 4.7. The modified Goldman algorithm. The triangle in part (a) is run first, and furnishes the input for the triangle in part (b). The new control vertices appear on the left side of the triangle in part (b).
The above algorithm can be run if only the middle n + 1 of the 3n + 1 knots are strictly within the old knot interval. The remaining 2n knots need not be new knots, but must be the knots immediately preceding and following these n + 1 knots in the refined knot vector (even so, extra computations and nonconvex combinations will occur unless an improved version is used). If these criteria are not met, then the two full tetrahedra cannot be validly run. This is because the desired new control vertices will not all affect curve segments defined over new knot intervals contained in a single old knot interval. In such cases it is possible to use partial tetrahedra to find what points we can from an interval; however, the details of this modification still need to be investigated. There is another Sablonnière version which merits attention. Consider again our original Sablonnière example from §4.7. Suppose in that example the lone face was a u_i u_{i-1} u_{i-2} Oslo triangle. Then on the second level (from the base) of that triangle we have B_G(t_q, u_{i-1}, u_i) and B_G(t_{q+1}, u_{i-1}, u_i). From these points we can find not only the point B_G(u
Original Control Vertices

FIG. 4.8. The recursive Sablonnière algorithm for n = 7. The 2n knots, written in increasing order as 1, ..., 14, label the triangle levels in which they are used.

4.10. Variations II: Rewritten Affine Combinations and Factored Knot Insertion
In this section we discuss two modifications that operate at a lower level than those in the previous section. The first of the two methods consists of reorganizing the arithmetic operations within each affine combination. The second involves reorganizing the operations within a sequence of affine combinations. The affine combinations in the knot insertion algorithms are of the form

    C = ((a_1 − a_2)/(a_3 − a_4)) P + (1 − (a_1 − a_2)/(a_3 − a_4)) Q.

There are many ways to evaluate this expression. The usual order of operations requires evaluating the coefficients (a_1 − a_2)/(a_3 − a_4) and 1 − (a_1 − a_2)/(a_3 − a_4), then doing the multiplications, then adding. Doing the calculations in this manner supplies what we will call the "standard" versions of an algorithm. An alternative method is simply to write the affine combination as

    C = Q + ((a_1 − a_2)/(a_3 − a_4)) (P − Q),

which, evaluated according to the usual order of operations, reduces the number of multiplications. This form may have disadvantages (it increases the number of additions/subtractions, and could be more prone to round-off error), but is sometimes used. It is applicable to any of the algorithms discussed above. We call versions employing this form "rewritten versions." The other technique we discuss in this section is "factored" knot insertion (sometimes abbreviated here as "fki") [3, Chap. 3, this volume]. Examine
Fig. 4.2 yet again. The denominators labeling each arrow are the same for all the new control vertices found using the interval [t_q, t_{q+1}). Why not do the divisions initially to get intermediate values, then find the new control vertices from those values? Doing so requires performing the divisions only once per interval; the remainder of the operations will be multiplications, additions, and subtractions. We use the multilinear blossom to explain this further. Suppose we have a cubic segment dependent on old control vertices B_G[(t_{q-2}, 1), (t_{q-1}, 1), (t_q, 1)], B_G[(t_{q-1}, 1), (t_q, 1), (t_{q+1}, 1)], B_G[(t_q, 1), (t_{q+1}, 1), (t_{q+2}, 1)], and B_G[(t_{q+1}, 1), (t_{q+2}, 1), (t_{q+3}, 1)]. We use the multilinearity of the multilinear blossom to write divided differences of these points as blossom values; for example
We then calculate differences of these values, continuing until we find B_G[(t_{q-2}, 1), (t_{q-1}, 1), (t_q, 1)], B_G[(t_{q-1}, 1), (t_q, 1), (1, 0)], B_G[(t_q, 1), (1, 0), (1, 0)], and B_G[(1, 0), (1, 0), (1, 0)]. The computations again form a triangular array, which we call a "division triangle," and which appears in Fig. 4.9. Each curve segment will have an associated division triangle; however, given a division triangle for one segment, one can find the division triangle for a contiguous segment at reduced cost because of overlap of the two triangles (see Fig. 4.10). From the division triangle new control vertices can be found without division. Figure 4.11 shows the necessary steps. The combinations rely on the multilinearity of the multilinear blossom. These computations again form a triangle; we call such arrays "fki triangles." Note that the values at the base of an fki triangle are the values from one side of the division triangle. Also note the combinations are simpler than in an Oslo triangle. Using this factored Oslo algorithm may reduce the number of operations. Nonetheless, each new control vertex needs one fki triangle. As might be expected, there are ways to reduce this requirement. Given B_G[(u_{i+r}, 1), ..., (u_{i+n-1}, 1), (1, 0), ..., (1, 0)] for r = 0, ..., n, it is easy to find B_G[(u_{i+r+1}, 1), ..., (u_{i+n}, 1), (1, 0), ..., (1, 0)], as Fig. 4.12 depicts. We can use this observation repeatedly, beginning with the values off the left side of the division triangle. Doing so yields the new control vertices obtainable from that interval; calculating each of these points uses n linear combinations. Figure 4.13 illustrates this process, which is discussed in detail in Chapter 3. One
FIG. 4.9. A division triangle. In this and subsequent figures we write (t_i, 1) as "t_i", (1, 0) as "δ", and B_G[α, β, γ] as "αβγ."
FIG. 4.10. Overlap between two consecutive division triangles. The unparenthesized values are common to both intervals, those subscripted with an "L" belong to [t_q, t_{q+1}), and those with an "R" to [t_{q+1}, t_{q+2}).
FIG. 4.12. Finding the values B_G[(u_{i+r+1}, 1), ..., (u_{i+n}, 1), (1, 0), ..., (1, 0)] from B_G[(u_{i+r}, 1), ..., (u_{i+n-1}, 1), (1, 0), ..., (1, 0)], r = 0, ..., n.
FIG. 4.13. A factored version of Boehm's algorithm. The new control vertices occupy the leftmost column.
interpretation of this process is that it is a multilinear variant of Boehm's knot insertion. Although we shall therefore consider this version as the factored version of Boehm's algorithm, it does not share some of the characteristics of Boehm's algorithm. In particular, the new knots in each original knot interval must be inserted in order; also the number of combinations done is slightly more than for Boehm's algorithm. Whether there exists another factored version that is more similar to Boehm's algorithm is an open question. A third factored version modifies Goldman's Oslo algorithm. Group the new knots within an interval into disjoint sequences of n + 1 consecutive knots. For each sequence use the first n knots in increasing order to run one fki triangle; then use the last n in reverse order to run a second fki triangle. This computation inserts n + 1 new knots; the new control vertices appear on the left side of the first fki triangle and at the apex of the second fki triangle. To perform additional insertions in that interval, replace the left side of the division triangle by the right side of the second fki triangle. This process is shown in Fig. 4.14. Factored versions of the modified Goldman algorithm and the Sablonnière variants also exist. Once again the idea is to compute the division triangle for each interval, and then use the values on the side of the triangle in place of the
FIG. 4.14. The factored Goldman Oslo algorithm. Two fki triangles are used. The first is the same as in Fig. 4.11. The second appears here. The new control vertices are obtained from the left side of Fig. 4.11 and the apex of this figure. The values on the left side of the division triangle are replaced by the values on the right side of this figure.
original control vertices. This provides factored modified Goldman, factored Sablonnière, factored Goldman Sablonnière, factored recursive Sablonnière, and factored recursive Goldman Sablonnière algorithms. The advantage of the factored versions is that by doing some common calculations as a start-up step, they decrease the number of arithmetic operations in the remainder of the computations. Often, however, this speedup is small, or is eroded by the start-up cost. This is investigated further in Part II. It is also unclear how prone to numerical error factored versions are. Some further discussion appears in Chapter 3 and in [17]. The first knot insertion algorithm we know of to use what we call a factored technique appears in [31].

4.11. Variations III: Matrix Algorithms
The knot insertion problem can be written in matrix form. Matrix algorithms for knot insertion find explicitly the matrix relating the new and old control vertices. In this section we explore matrix algorithms. While the basic ideas for these algorithms are simple, the topic is rich. It is not our intention to give a comprehensive treatment of this topic. Rather, this section has two themes. The first is that matrix algorithms are closely linked to the point oriented algorithms of the previous sections. The second is that just as there are many point oriented knot insertion algorithms, so there are also many algorithms for finding the knot insertion matrix. Again let {t_i} be a knot vector, and {u_i} a refinement of that knot vector. Let {N_i(t)} be the degree n B-splines over {t_i}, V the column vector containing the old control vertices {V_i}, and W the column vector containing the new control vertices {W_i}. Let [t_q, t_{q+1}) be the original knot interval from which we find the new control vertex or vertices of interest. The knot insertion techniques of previous sections find W from V, although they do not explicitly find the matrix R such that W = RV. Are there advantages to finding R explicitly? Perhaps. For example, one may have access to hardware for rapid matrix multiplication. Or one may be working in a context where the number of operations involved in finding R and multiplying RV is competitive with the number of operations in the methods mentioned above (see Part II). There are many ways to find the matrix R. Most matrix algorithms we will consider are directly related to the point oriented algorithms discussed above. Within these matrix algorithms there are two subcategories: column oriented techniques and row oriented techniques. The underlying idea for column oriented techniques is that one column of the matrix R contains the contribution of the corresponding old control vertex to the new control vertices.
These values can be found by running any of the methods of the previous sections with the relevant old control vertex replaced by the value 1 and all the other old vertices replaced by 0. While this may seem, at first glance, to be inefficient because the algorithm must be run many
times, the control vertices are replaced by scalars, and each time all but one of these are 0. Row oriented matrix algorithms begin at a new control vertex, and find the coefficients for expressing that new point in terms of the old control vertices. This is done by running the computations in the diagram for finding that point top down instead of bottom up: the values which appear at the old control vertices are their coefficients. This approach is better suited to methods where the diagram is bounded only with respect to the degree of the curve (such as the Oslo algorithm), rather than where it also depends on the number of knots inserted (as Boehm's algorithm). Nonetheless, this technique is also applicable to any of the algorithms described above. The best known matrix algorithm is the row oriented version of the Oslo algorithm, better known as the Oslo I algorithm [11]. Running a u_{i+n} ··· u_{i+1} Oslo triangle top down yields the recurrence (4.1) for the entries of R.
The entry R_{ij} is the value α_{j,n}(i). This algorithm is illustrated in Fig. 4.15, and is a generalization of the well-known Cox-de Boor-Mansfield recurrence for the B-splines [7]. Because of this and other similarities with B-splines, the entries in the matrix R are commonly called "discrete B-splines" [11], [22]. This recurrence does not rely on affine combinations as the blossom identities often do. If we wish to work with affine combinations, however, we can often use a different form of the discrete B-splines. Define
These values are the analogues of the Schoenberg-normalized B-splines; it is easy to show from (4.1) that they obey the relationship
There are other important discrete B-spline identities in addition to these recurrences. Some of these follow directly from the point methods. For example, using a column oriented approach to the Oslo algorithm supplies another recurrence for finding discrete B-splines. Still other identities arise in a less straightforward manner. We now examine a few of these.
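As a concrete illustration of a row oriented computation, the sketch below fills one row of R at a time using a Cohen–Lyche–Riesenfeld style recurrence for discrete B-splines. This is an assumption-laden reconstruction (indexing conventions for discrete B-splines differ from paper to paper, and end conditions at the right endpoint of the knot vector need extra care), not a verbatim transcription of this chapter's recurrence:

```python
def insertion_matrix(t, u, n):
    """Matrix R with W = R V, for old knots t, refined knots u, degree n.
    Row i holds discrete B-spline values alpha_{j,n}(i).
    Caveat: the order-1 base case assumes each u[i] lies in some
    half-open interval [t[j], t[j+1]); knots equal to the right
    endpoint of t need special handling."""
    R = []
    for i in range(len(u) - n - 1):
        # order 1: alpha_{j,1}(i) = 1 iff t[j] <= u[i] < t[j+1]
        a = [1.0 if t[j] <= u[i] < t[j + 1] else 0.0
             for j in range(len(t) - 1)]
        for k in range(1, n + 1):      # raise the order of the discrete B-splines
            b = []
            for j in range(len(t) - k - 1):
                v = 0.0
                if t[j + k] > t[j]:
                    v += (u[i + k] - t[j]) / (t[j + k] - t[j]) * a[j]
                if t[j + k + 1] > t[j + 1]:
                    v += (t[j + k + 1] - u[i + k]) / (t[j + k + 1] - t[j + 1]) * a[j + 1]
                b.append(v)
            a = b
        R.append(a)
    return R
```

Each row of the resulting matrix sums to 1, reflecting the fact that every new control vertex is an affine combination of the old ones.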
FIG. 4.15. The row oriented matrix version of the Oslo algorithm. An Oslo triangle for W_i is used to find the coefficients for writing W_i in terms of the V_j; these coefficients are found by running the triangle top down with a 1 at the apex.
The link between discrete B-splines and blossoming is given by the formula
[25] for j = q − n, ..., q. Since the blossom is symmetric, it also holds that
for j = q − n, ..., q and any permutation π of n objects. This can then be applied to the recurrence (4.1) to get
Then α_{j,n}(i) = β_{j,n}(i). This result then yields a variety of discrete B-spline relationships. For example, we can obtain the following identities, the first two of which appear in [25], the third of which appears in [22], and the last of which follows from the third and the recurrence (4.1).
These and other discrete B-spline relationships are useful here for two reasons. First, in this chapter we have approached knot insertion originally through a point oriented perspective, and in this section have shown this point perspective also provides algorithms for computing the knot insertion matrix. We could have originally derived the necessary discrete B-spline formulas, used
them to produce matrix methods, and from these obtained the point oriented algorithms. Second, discrete B-spline formulas may indicate still other knot insertion algorithms (both matrix and point oriented) or new variants of known algorithms. In fact, the relationships mentioned above suggest a variety of ways to compute the knot insertion matrix. It is impossible in this chapter to consider every knot insertion algorithm, so we refrain here from a detailed discussion of algorithms arising from these formulas; however, as an example of one such algorithm, we briefly mention an algorithm originally in [25]. From (4.5) it follows that
Thus, after finding any necessary starting values, one can fill the knot insertion matrix top to bottom and left to right. This algorithm is discussed further in [25] and below in Part II. Although we have briefly discussed the close relationship between the point oriented algorithms and the matrix algorithms, and between blossoming and discrete B-splines, many details of these relationships are unknown and therefore merit further investigation. In particular, how economically the matrix algorithms that arise from the point oriented approaches can be derived from discrete B-spline identities, whether the identities lead to still other knot insertion algorithms or new variants of known algorithms, and whether there exist other, useful, currently unknown, discrete B-spline identities are open questions. Hybrid methods combining point and matrix techniques are also possible. In fact it is possible to mix and match with great abandon most of the techniques described above. In some cases it may actually be advantageous to do so. We will not expand on this observation other than to note that it may be natural to use hybrid methods in techniques where one step is common in finding many points. For example, in a factored version of a row oriented matrix technique the division triangle for a segment affects every new vertex found using that segment. Rather than doing the calculations in the division triangle once per new control vertex, they may be done only once and the values on the side of the triangle used in place of the original control vertices.
4.12. Summary of Part I
Using blossoming, we have presented many ways of inserting knots. Although the basic techniques are not difficult, the many versions may be confusing. At the heart of the profusion of versions are a few basic ideas: there are a few fundamental ways of doing knot insertion (Boehm, Oslo, Sablonniere), but many versions of the resulting algorithms; these versions are the result of eliminating unnecessary or redundant computations (improved Oslo and other improved versions, Goldman, modified Goldman, recursive Sablonniere), restructuring the calculations within an algorithm (rewritten, factored), and finding the transformation matrix explicitly rather than working directly with the control vertices (row and column oriented matrix approaches). The various modifications are neither hierarchical nor independent, and the resulting versions cannot be categorized simply. Some modifications apply only to one type of algorithm (e.g., the recursive technique we discussed seems applicable only to Sablonniere), others to many (e.g., many algorithms have factored versions). Some modifications can be mixed (e.g., there is a row oriented matrix version of the factored recursive Goldman variant of Sablonniere's algorithm), and some are mutually exclusive (e.g., factored and rewritten). Because we wished to provide a broad view of knot insertion, we neither restricted ourselves to the most commonly used methods nor refrained from including methods that may never be used. At the same time, the complexity of the topic prohibited discussing all possible versions. There are undoubtedly still other methods for doing knot insertion or other versions of existing algorithms. It is also possible that some methods mentioned here but not commonly used may prove more useful than the amount of attention paid them may indicate. The discussion in this part is by no means definitive. It does, however, provide a unified view of many knot insertion algorithms. 
We hope this presentation aids in current understanding and future investigations of this topic.
Part II: Computation Counts

4.13. Motivation
In this part we will derive computation counts for some of the algorithms discussed in Part I. These counts are theoretical estimates, rather than definitive measures of an algorithm's speed. The goal here is not to determine the fastest way to insert knots. Rather our intent is twofold: First, to obtain a better understanding of knot insertion algorithms by identifying factors affecting their speed as well as some issues in their implementation. Second, to provide some general discussion of the efficiency of the newer techniques such as factored knot insertion. To this end, we do not provide computation counts
for every version mentioned above, which, if nothing else, would be an exercise in endurance. Rather we provide computation counts for certain methods in certain contexts in order to illustrate a few points about the algorithms and their efficiency. The first section in this part, §4.14, provides background on computation counts and related considerations. Section 4.15 sets notation. In the next three sections we derive computation counts: in §4.16 for curves using point techniques, in §4.17 for curves using some matrix techniques, and in §4.18 for surfaces. Section 4.19 contains concluding remarks for Part II.

4.14. Background and Other Considerations
In the following sections, we will derive computation counts for some versions of the knot insertion algorithms. It is important not to see these counts as statements that one algorithm is faster or better than another. The reasons for this are many:

1. The speed of an algorithm is merely one criterion by which an algorithm is evaluated. Other criteria include, but are not limited to, numerical stability, memory costs, simplicity, amenability to implementation as a parallel algorithm or on high performance or specially designed hardware, and the need to interface with other software.

2. Often the speed of a program which contains the algorithm is not a major concern. Even if it is, the bottlenecks in the program may not involve the algorithm.

3. The counts given below are estimates. We will neglect such matters as the bookkeeping the algorithms must do to function optimally (which in some cases may be substantial). We will also make some simplifying assumptions concerning the number and location of the new knots. For these reasons the implementations given below may not actually be the most practical and thus our analysis should be considered more as theoretical rather than applied.

4. The efficiency of the algorithm depends on how it is implemented. Part of the question of implementation was taken up in the discussion of the various versions of the algorithm: for example an affine combination of two numbers can be implemented using either one or two multiplications. There are other subtler observations which may improve the speed of the algorithms. We will mention a few below, but because of the complexity they often introduce, we will not go into them in detail. Because of the complexity of the subject there may well be more efficient implementations (or perhaps even more efficient methods of inserting knots) than those given below. There may also be implementations that are not currently useful but which may become practical on future hardware.
5. The count will depend on exactly what we count. It is possible to count affine combinations, floating point operations, long and short operations, etc. Some simplifying assumptions will have to be made. We will not count integer operations involved in finding, e.g., index values, nor attend to questions such as how long it takes to access elements of an array. These certainly contribute to the time the algorithm takes, but are too complicated and variable. What we will count is additions or subtractions, multiplications, and divisions. In some cases we will derive quantities such as affine combinations in finding the number of additions, subtractions, multiplications, and divisions; however, we will not do so in general since not all the algorithms involve affine combinations. The amount of time which additions, subtractions, multiplications, and divisions take is highly hardware dependent. As a general rule additions and subtractions are cheaper than multiplications, which in turn are cheaper than divisions, although on some hardware there is little or no difference. Further, some hardware can do certain operations in parallel or at faster speeds. Thus on these, while the same number of operations are usually done, the actual time they take is less. In some specific contexts, therefore, counting other quantities is more appropriate. However, counting additions or subtractions, multiplications, and divisions allows us to work in a general context, identifies the major factors influencing each algorithm's speed, and is specific enough that other counts (such as the number of affine combinations) are often derived in the discussion as well.

4.15. Notation
We will use the following notation:

n — the degree of a spline curve
n_u — the degree, in the u-direction, of a tensor product surface
n_v — the degree, in the v-direction, of a tensor product surface
m — the number of knots inserted in a spline curve
m_j — the number of knots inserted in the interval [t_j, t_{j+1})
m_u — the number of knots inserted in the u-direction of a tensor product surface
m_v — the number of knots inserted in the v-direction of a tensor product surface
p — the original number of control vertices
Knot Insertion Algorithms
p_u — the original number of control vertices in the u-direction of a surface
p_v — the original number of control vertices in the v-direction of a surface
r — the number of new final control vertices that must be found
r_u — the number of new control vertices to be found per knotline in the u-direction of a surface
r_v — the number of new control vertices to be found per knotline in the v-direction of a surface
d — the dimension of the control vertices

Some computation counts will depend on the number (r) of new control vertices to be found, some on the number (m) of knots inserted. Neither of these quantities is a simple function of the other, nor of the total number of the original control vertices (p), nor of the total number of the final control vertices (p + m if all the new knots are in the domain of the spline curve). If all the new knots are in widely separated knot intervals, r = mn, while if all are in the same knot interval, r = m + n − 1. In general, however, m ≤ r ≤ p + m.
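The dependence of r on how the inserted knots are distributed can be illustrated with a small helper of our own (hypothetical, not from the text), which assumes the ranges of vertices affected by different intervals do not overlap:

```python
def new_vertex_count(knots_per_interval, n):
    """Estimate r from the number of knots m_j inserted in each original
    knot interval, assuming the affected vertex ranges do not overlap:
    inserting m_j knots into one interval yields m_j + n - 1 new vertices."""
    return sum(mj + n - 1 for mj in knots_per_interval if mj > 0)

# m = 5 knots in widely separated intervals of a cubic (n = 3): r = mn = 15
print(new_vertex_count([1, 1, 1, 1, 1], 3))  # 15
# all m = 5 knots in a single interval: r = m + n - 1 = 7
print(new_vertex_count([5], 3))  # 7
```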
4.16. Point Techniques for Curves
Deriving computation counts for point techniques (i.e., techniques working directly with the control vertices) is largely a matter of counting affine or linear combinations. For that reason the counts are often easily derived. We provide combination counts for all the point versions mentioned in Part I of the chapter, and afterwards discuss standard, rewritten, and factored versions.

Boehm: Inserting one knot requires n combinations; inserting m knots requires mn combinations.

Oslo: To find one new control vertex involves calculating one Oslo triangle, thus n(n + 1)/2 combinations. To find the r new control vertices therefore requires a total of rn(n + 1)/2 combinations. Using the improved Oslo algorithm will result in smaller triangles for at least some new control vertices; rn(n + 1)/2 combinations is therefore an upper bound for the improved Oslo algorithm. If many knots are inserted per interval, the upper bound will be approached, while if, e.g., only one knot is inserted per interval, the upper bound will significantly overestimate the number of computations.

Goldman Oslo: Goldman's version uses the same number of combinations as Boehm's algorithm: mn [13].

Modified Goldman: This version requires two triangles for each set of n + 1 consecutive new control vertices, so each such set requires n(n + 1) combinations. Assuming n + 1 divides p + m, this method uses at most n(n + 1)(p + m)/(n + 1) = (p + m)n combinations (if this assumption is not true the actual number of computations may increase slightly). An "improved" version of the modified Goldman algorithm reduces its cost to mn combinations [4].
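Boehm's single-knot step counted above can be sketched concretely; the following minimal Python (ours, not the chapter's code) assumes a clamped knot vector with `ctrl[i]` supported on `knots[i..i+n+1]`:

```python
def boehm_insert(knots, ctrl, n, u):
    """Insert the knot u once into a degree-n B-spline (Boehm's algorithm).
    Exactly n new vertices are formed, matching the count of n combinations."""
    # locate k with knots[k] <= u < knots[k+1]
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u < knots[i + 1])
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - n:
            new_ctrl.append(ctrl[i])            # unchanged leading vertices
        elif i <= k:
            a = (u - knots[i]) / (knots[i + n] - knots[i])
            new_ctrl.append(tuple(a * q + (1 - a) * p
                                  for p, q in zip(ctrl[i - 1], ctrl[i])))
        else:
            new_ctrl.append(ctrl[i - 1])        # unchanged trailing vertices
    return sorted(list(knots) + [u]), new_ctrl

# degree 1: inserting u = 0.5 just splits the segment at its midpoint
_, q = boehm_insert([0.0, 0.0, 1.0, 1.0], [(0.0, 0.0), (2.0, 0.0)], 1, 0.5)
print(q)  # [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```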
Sablonniere: One Sablonniere tetrahedron takes n(n + 1)(n + 5)/6 affine combinations [14]. Subject to remarks similar to those in the previous paragraph, we obtain an estimate of n(n + 5)(p + m)/6 combinations.

Recursive Sablonniere: It can be proved by induction that a recursive Sablonniere tetrahedron for a degree n = 2^N − 1 curve requires 3·2^{2N−1} − (N + 3)·2^{N−1} = 3(n + 1)^2/2 − (n + 1)(3 + log_2(n + 1))/2 combinations. We therefore get an estimate of (p + m)(3n/2 − log_2(n + 1)/2) combinations. As with the two previous cases, some strategies or circumstances may alter this estimate. Furthermore, if n is not one less than a power of 2, the number of computations will increase.

Goldman Sablonniere: This technique uses two Sablonniere tetrahedra to insert 3n + 1 knots. Assuming the number of knots inserted in each interval is a multiple of 3n + 1, we get a total of (m/(3n + 1))·2·(n(n + 1)(n + 5)/6) = mn(n + 1)(n + 5)/(9n + 3) combinations. If this assumption is not true, the number of calculations may increase.

Recursive Goldman Sablonniere: Let the curve be of degree n = 2^N. Assume the number of knots to be inserted in each original knot interval is a multiple of 3n + 1. For each set of 3n + 1 knots two tetrahedra are required. Each one has one lone face, for a total of n(n + 1) combinations, and requires n combinations for the first level of the veer face, for an additional 2n combinations. The 2^N = n points just found on each veer face are used to start recursive tetrahedra which cost 3·2^{2N−1} − (N + 3)·2^{N−1} = 3n^2/2 − n(3 + log_2 n)/2 combinations each. The total is therefore mn(4n − log_2 n)/(3n + 1). As with the Goldman Sablonniere and recursive Sablonniere algorithms, the actual count may be more. Again we emphasize that the counts above are estimates.
In some cases the counts will actually be lower (e.g., using the improved Oslo algorithm rather than the original Oslo algorithm will decrease the computation count as mentioned above); in some cases they will be higher (e.g., if n is not a power of 2 the recursive Goldman Sablonniere algorithm will require more combinations). In many cases the precise count has not been studied in detail. Even though the counts are estimates, some details emerge. The algorithms are (roughly) either linear or quadratic in n. The original Oslo, worst case improved Oslo, Sablonniere, and Goldman Sablonniere algorithms are all quadratic; the other algorithms, including the recursive Sablonniere and recursive Goldman Sablonniere, are linear. Although Boehm and Goldman Oslo have the lowest count, in some cases the other algorithms may not require many more operations. The details of these algorithms may merit further investigation.

The algorithms above have standard, rewritten, and factored versions. Let A/S stand for an addition or subtraction, M for a multiplication, and D for a division. A standard affine combination in the algorithm is of the form

C = γP + (1 − γ)Q,   γ = (u − t_i)/(t_{i+n} − t_i),
and takes (3 + d) A/S, 2d M, and 1 D. Rewriting the affine combination as

C = Q + γ(P − Q)
and evaluating takes a total of 2(1 + d) A/S, d M, and 1 D. A factored combination takes (1 + d) A/S and d M. Using the rewritten form rather than the standard form reduces the multiplications by d but increases the additions/subtractions by d − 1 per combination. A factored combination has d fewer multiplications, 2 fewer additions/subtractions, and none of the divisions of the standard form. It has the same number of multiplications, half the additions/subtractions, and none of the divisions of the rewritten form. This count, of course, neglects the start-up cost for factored versions, which is n(n + 1)(1 + d)/2 A/S + n(n + 1)d/2 D for each original knot interval involved. In some algorithms, if the start-up has been computed for a contiguous interval, the cost is reduced to n(1 + d) A/S + nd D. For some factored versions there are also additional costs. For example, the algorithm described in Chapter 3 is a factored version of Boehm's algorithm; however, it uses slightly more than mn combinations, although the cost can sometimes be reduced to mn. We use the term "start-up cost" below to refer not only to the cost of the division triangles, but also to other (generally small) additional costs associated with factored versions.

Table 4.1 lists the computation counts for the point methods. Table 4.2 shows the computation counts for the special case of a cubic curve in R^3. Counts for the recursive Goldman Sablonniere method are not given in Table 4.2; for cubic curves the recursive Goldman Sablonniere method is the same as the Goldman Sablonniere method.
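The three combination forms just described can be sketched side by side (an illustrative fragment of ours, not the chapter's code; `d` is the point dimension, and the factored variant assumes its weight γ was precomputed during start-up):

```python
def standard(u, a, b, P, Q):
    # (3 + d) A/S, 2d M, 1 D: compute gamma and 1 - gamma, then combine
    g = (u - a) / (b - a)
    h = 1.0 - g
    return tuple(g * p + h * q for p, q in zip(P, Q))

def rewritten(u, a, b, P, Q):
    # 2(1 + d) A/S, d M, 1 D: only one multiplication per coordinate
    g = (u - a) / (b - a)
    return tuple(q + g * (p - q) for p, q in zip(P, Q))

def factored(g, P, Q):
    # with gamma precomputed in a division triangle, no divisions remain
    return tuple(q + g * (p - q) for p, q in zip(P, Q))

P, Q = (2.0, 4.0), (0.0, 0.0)
r1 = standard(0.25, 0.0, 1.0, P, Q)
r2 = rewritten(0.25, 0.0, 1.0, P, Q)
r3 = factored(0.25, P, Q)
print(r1, r2, r3)  # all three give (0.5, 1.0)
```

All three produce the same point; they differ only in which scalar quantities are formed, and hence in the A/S, M, and D tallies.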
4.17. Matrix Methods
There are more matrix methods than point methods and there are more possible ways to implement them. We have chosen five versions which illustrate some of the issues involved: column oriented Boehm, row oriented Oslo, column oriented Goldman Oslo, row oriented Goldman Oslo, and the algorithm, due to Mørken, following from the discrete B-spline identity (4.8).

Column oriented Boehm: In order to find the column corresponding to the old control vertex V_j, we can use Boehm's algorithm with control vertices V_i = δ_{ij}. We need not insert all the new knots, but only those that, when inserted, use V_j in obtaining the new vertices, namely those in the interval [t_j, t_{j+n+1}). Each new knot is, therefore, used to find n + 1 columns, so finding the matrix requires m(n + 1)n combinations of scalars. This number, however, can be reduced because many of these combinations are combinations of zeros. If we insert the knots in a judicious order, we
TABLE 4.1
Estimates for point methods for curves.
(Rows: Boehm, Oslo, Goldman Oslo, modified Goldman, Sablonniere, recursive Sablonniere, Goldman Sablonniere, and recursive Goldman Sablonniere, each with its start-up cost where applicable; columns: standard, rewritten, and factored versions. Numeric entries not recovered from the original.)
TABLE 4.2
Estimates for point methods for cubic space curves.
(Rows: Boehm, Oslo, Goldman Oslo, modified Goldman, Sablonniere, recursive Sablonniere, and Goldman Sablonniere, each with its start-up cost where applicable; columns: standard, rewritten, and factored versions. Numeric entries not recovered from the original.)
can take advantage of this simplification. One way to do so is to insert the knots in the following order: first insert, in decreasing order, the knots in the interval [t_j, t_{j+1}). This takes m_j combinations because of the zeros. Then insert, now in increasing order, the knots in [t_{j+1}, t_{j+2}). This takes m_{j+1}·n combinations, since only one control vertex used to start these computations need be zero. Next insert in increasing order the knots in [t_{j+2}, t_{j+3}). This takes m_{j+2}(n − 1) combinations, since the last two control vertices used to start these computations are zero. Continue in this manner, inserting in increasing order the knots in the interval [t_{j+k}, t_{j+k+1}), using m_{j+k}(n − k + 1) combinations since the last k control vertices used to start the computations are zero. Summing over all j (i.e., all old control vertices) yields a total of m(1 + n + (n − 1) + ··· + 1) = m(n^2 + n + 2)/2 combinations to find the matrix. Note that this strategy does not extend in a straightforward manner to a factored version, since a nonzero value or values at the base of the division triangle will affect more than one value on the side of the division triangle. It is possible to find the points of the side of the division triangle first, and then use a column oriented method to find the matrix which transforms these points to the final control vertices; however, this will complicate the algorithm.

Once the transformation matrix is found we need to multiply the old control vertices by it. The matrix has, at worst, n + 1 nonzero entries per row, so for each of the r new control vertices in R^d we have dn A/S and d(n + 1) M. The total estimate for (standard) column oriented Boehm is therefore 2m(n^2 + n + 2) + dnr A/S, m(n^2 + n + 2) + d(n + 1)r M, and m(n^2 + n + 2)/2 D.

Row oriented Oslo: For each new control vertex we need to find the corresponding row in the matrix. This is found by placing a 1 at the apex of the associated Oslo triangle, and computing down.
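For concreteness, one row of the knot insertion matrix can also be obtained bottom-up from the discrete B-spline recurrence of Cohen, Lyche, and Riesenfeld [11]; the following sketch is ours (the indexing conventions are assumptions, and zero-denominator terms are simply dropped):

```python
def oslo_row(t, tau, n, i):
    """Row i of the degree-n knot insertion matrix from old knots t to
    refined knots tau, via the discrete B-spline recurrence."""
    p = len(t) - n - 1                     # number of old control vertices
    # degree-0 discrete B-splines: 1 on the old interval containing tau[i]
    alpha = [1.0 if t[j] <= tau[i] < t[j + 1] else 0.0
             for j in range(len(t) - 1)]
    for k in range(1, n + 1):
        nxt = []
        for j in range(len(alpha) - 1):
            s = 0.0
            if alpha[j] and t[j + k] != t[j]:
                s += (tau[i + k] - t[j]) / (t[j + k] - t[j]) * alpha[j]
            if alpha[j + 1] and t[j + k + 1] != t[j + 1]:
                s += (t[j + k + 1] - tau[i + k]) / (t[j + k + 1] - t[j + 1]) * alpha[j + 1]
            nxt.append(s)
        alpha = nxt
    return alpha[:p]

# degree 1, inserting 0.5 into [0, 0, 1, 1]: the middle row averages
print(oslo_row([0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.5, 1.0, 1.0], 1, 1))
# [0.5, 0.5]
```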
The coefficients in the matrix will occupy the positions at the base of the triangle which are normally occupied by the original control vertices. Suppose we have the values β_{i−1,j}, j = 0, …, i − 1, in the (i − 1)st level from the top of the triangle. The values on the next level down are given by

β_{i,j} = γ_{i,j−1} β_{i−1,j−1} + (1 − γ_{i,j}) β_{i−1,j}
for the appropriate constants γ. One way to compute these values is by (i) finding the γ's, (ii) finding γ_{i,j}β_{i−1,j} for all j, and (iii) adding γ_{i,j−1}β_{i−1,j−1}, −γ_{i,j}β_{i−1,j}, and β_{i−1,j}. The cost of each of these for level i is (i) 2i A/S and i D, (ii) i M, and (iii) 2(i + 1) A/S, which results in a total of 2n(n + 2) A/S, n(n + 1)/2 M, and n(n + 1)/2 D per new control vertex. The matrix multiplication is the same as for Boehm's algorithm. The total count is therefore 2n(n + 2)r + dnr A/S, n(n + 1)r/2 + d(n + 1)r M, and n(n + 1)r/2 D. If the improved Oslo algorithm is used, both the cost of finding the matrix and the cost of multiplying it with the control vertices will be reduced.

Column oriented Goldman Oslo: To insert n + 1 knots, we must run two Oslo triangles. For each control vertex at the base of the two triangles, we run
the triangles with that control vertex set to 1 and all others set to 0. Thus we run the two triangles n + 1 times each. This must be done for all m/(n + 1) sets of knots. The number of combinations this induces can be reduced if we disregard combinations of zeros. Note that, for a given triangle run n + 1 times as just described, the point at the apex will be computed all n + 1 times, the two points on the first level from the top n times each, and, in general, the i + 1 points on the ith level from the top n + 1 − i times each. Thus each triangle yields ∑_{i=1}^{n} i(n + 2 − i) = n(n + 1)(n + 5)/6 combinations. Note, however, that some of the coefficients in these combinations are shared. More specifically, each of these combinations is of the form γP + (1 − γ)Q. Each of the γ's and (1 − γ)'s is used many times, so it suffices to compute these quantities once per triangle. Doing so uses 3n(n + 1)/2 A/S and n(n + 1)/2 D per triangle. Then each of the n(n + 1)(n + 5)/6 combinations requires 2 multiplications and 1 addition. Since these computations must be done for each set of n + 1 new knots, either we may compute matrix products to find the total transformation matrix R explicitly and then find R·V, or we may update the control vertices after the insertion of each set of n + 1 knots. The nontrivial part of each matrix arising from the insertion of n + 1 knots constitutes a submatrix with 2(n + 1) rows and n + 1 columns; because the algorithm pulls points from the sides of the triangles, the (sub)submatrix consisting of the top n + 1 rows is lower triangular, and the (sub)submatrix formed by the remaining rows is upper triangular. Multiplying such a 2(n + 1) × (n + 1) matrix by the control vertices takes roughly dn(n + 1) A/S and d(n + 1)(n + 2) M. The alternative approach (first multiplying all the matrices together) seems less desirable, since one would still have to do the final multiplication with the control vertices.
The total count for the standard version of this algorithm is therefore mn(n + 14)/3 + dmn A/S, (2/3)mn(n + 5) + dm(n + 2) M, and mn D.

Row oriented Goldman Oslo: To run the row oriented version of Goldman's algorithm, we need to run (top-down) one triangle for each of the 2n new vertices resulting from the insertion of n + 1 new knots. Thus each set of n + 1 knots requires running 2 triangles of k levels for k = 1, …, n. Note that each of these triangles with k levels is a subtriangle of one of the two triangles of n levels. Just as in the case of the column oriented Goldman Oslo, it suffices to find the coefficients for the combinations only once. More specifically, using the notation from the discussion above on row oriented Oslo, it suffices to find the γ_{i,j} for the triangles of n levels. This uses n(n + 1) A/S and n(n + 1)/2 D per triangle of n levels. Referring to the discussion of the row oriented Oslo algorithm, the remainder of the steps take k(k + 3) A/S and k(k + 1)/2 M for each triangle of k levels, so the total count for this part is 2n(n + 1)(n + 8)/3 A/S, n(n + 1)(n + 2)/3 M, and n(n + 1) D for each of the m/(n + 1) sets of n + 1 knots. The matrix multiplication is the same as for column oriented Goldman Oslo. The total count is thus 2mn(n + 8)/3 + dmn A/S, mn(n + 2)/3 +
TABLE 4.3
Estimates for matrix methods for curves.
(Rows: standard column oriented Boehm, row oriented Oslo, standard column oriented Goldman Oslo, row oriented Goldman Oslo, and Mørken. Numeric entries not recovered from the original.)
dm(n + 2) M, and mn D.

Mørken's algorithm: The algorithm arising from (4.8) differs from the other algorithms in this section because it is not obtained from a point algorithm. Mørken's algorithm and similar algorithms arising from discrete B-spline identities have not been extensively studied. For these reasons, there may be more efficient algorithms of this type, or better implementations than mentioned here. To find the knot insertion matrix using Mørken's algorithm requires using (4.8) for each of the at most n + 1 nonzero entries in the r nontrivial rows. Each use involves 8 A/S, 4 M, and 2 D, for a total of 8(n + 1)r A/S, 4(n + 1)r M, and 2(n + 1)r D. The matrix multiplication is the same as for Boehm's algorithm, so the total cost is 8(n + 1)r + dnr A/S, 4(n + 1)r + d(n + 1)r M, and 2(n + 1)r D.

The final counts appear in Table 4.3. Table 4.4 shows counts for the case of cubic space curves. Note that the counts in Tables 4.3 and 4.4 are closer than those in Tables 4.1 and 4.2, and might be closer still if our counts were more exact. In particular, note that the count for the Oslo algorithm is now quite similar to the other counts. Although the counts within Tables 4.3 and 4.4 differ from algorithm to algorithm, it is hard to draw any firm conclusions from them because of their approximate nature and the different variables involved. For instance, the estimates for the column oriented algorithms will differ if rewritten or fki (factored knot insertion) versions are used (see as well the remarks above on fki column oriented algorithms). Also, the estimates for the Oslo algorithm and Mørken's algorithm involve the number of new control vertices to be found rather than
TABLE 4.4
Estimates for matrix methods for cubic space curves.
(Rows: standard column oriented Boehm, row oriented Oslo, standard column oriented Goldman Oslo, row oriented Goldman Oslo, and Mørken. Numeric entries not recovered from the original.)
the number of knots inserted. While the former number is at least equal to the latter, the cases where it is much greater are precisely those cases where improved versions of these algorithms are most effective [24]. Furthermore, the estimates for the Goldman Oslo versions assume that it is possible to group, over each original knot interval, the new knots into disjoint sets of n + 1 consecutive knots. If this assumption is false the counts may increase. Making this assumption also helps identify rows having fewer than n + 1 nonzero entries. All the counts save Mørken's algorithm are quadratic in n, although the leading coefficients of these quadratic algorithms do not involve d. Mørken's algorithm is linear in n, but its linear term has a large coefficient. Additional theoretical scrutiny and empirical testing may be useful if further results are desired.
4.18. Surfaces
A tensor product surface

S(u, v) = ∑_i ∑_j V_{ij} B_i(u) C_j(v),

where the B_i are degree n_u B-splines, the C_j are degree n_v B-splines, and the V_{ij} form a p_u × p_v matrix of d-dimensional points, can be written as either

S(u, v) = ∑_i ( ∑_j V_{ij} C_j(v) ) B_i(u)

or

S(u, v) = ∑_j ( ∑_i V_{ij} B_i(u) ) C_j(v).
Which of these decompositions is chosen will affect the computation count. Because of this additional measure of complexity we shall give counts for only four algorithms: point oriented Boehm, column oriented Boehm, row oriented Oslo, and row oriented Goldman Oslo.

Point oriented Boehm: If we insert in the v-direction first, we do p_u m_v n_v affine (or factored) combinations. Then inserting in the u-direction requires (p_v + m_v) m_u n_u combinations. Inserting first in the u- and then in the v-direction yields p_v m_u n_u + (p_u + m_u) m_v n_v combinations. Thus the point version of Boehm's algorithm for surfaces requires p_u m_v n_v + p_v m_u n_u + m_u m_v min(n_u, n_v) combinations.

Column oriented Boehm: Finding the respective matrices for the u- and v-directions is the same as in the case of curves. What differs here, then, is the matrix multiplication. If we first multiply the transformation matrix for the u-direction by the matrix containing the old control vertices, we get a total of d[p_v r_u n_u] A/S and d[p_v r_u (n_u + 1)] M. Then multiplying the result by the v-transformation matrix uses another d[(p_u + m_u) r_v n_v] A/S and d[(p_u + m_u) r_v (n_v + 1)] M. Switching u and v in these formulas provides the count for the alternative ordering. It is possible that one ordering supplies fewer additions or subtractions but more multiplications than the other; for this reason we give both totals, which are 2m_u(n_u^2 + n_u + 2) + 2m_v(n_v^2 + n_v + 2) + d[p_u r_v n_v + p_v r_u n_u + m_u r_v n_v [respectively, m_v r_u n_u]] A/S, m_u(n_u^2 + n_u + 2) + m_v(n_v^2 + n_v + 2) + d[p_u r_v (n_v + 1) + p_v r_u (n_u + 1) + m_u r_v (n_v + 1) [respectively, m_v r_u (n_u + 1)]] M, and (1/2)m_u(n_u^2 + n_u + 2) + (1/2)m_v(n_v^2 + n_v + 2) D.

Row oriented Oslo: Finding the respective matrices for the u- and v-directions is the same as in the case of curves. The matrix multiplication is the same as for column oriented Boehm.
The total counts are therefore 2n_u(n_u + 2)r_u + 2n_v(n_v + 2)r_v + d[p_u r_v n_v + p_v r_u n_u + m_u r_v n_v [respectively, m_v r_u n_u]] A/S, (1/2)n_u(n_u + 1)r_u + (1/2)n_v(n_v + 1)r_v + d[p_u r_v (n_v + 1) + p_v r_u (n_u + 1) + m_u r_v (n_v + 1) [respectively, m_v r_u (n_u + 1)]] M, and (1/2)n_u(n_u + 1)r_u + (1/2)n_v(n_v + 1)r_v D. Again the improved Oslo algorithm will take fewer operations.

Row oriented Goldman Oslo: Finding the transformation matrix for inserting n_u + 1 new knots in the u-direction, or n_v + 1 new knots in the v-direction, is done as in §4.17. If we multiply the transformation matrices by the control vertex matrix as in §4.17, we get d p_v m_u n_u A/S and d p_v m_u (n_u + 2) M if we insert first in the u-direction, and an additional d(p_u + m_u) m_v n_v A/S and d(p_u + m_u) m_v (n_v + 2) M for the v-direction. Switching u and v in these formulas provides the count for the alternative ordering. The total count is therefore (2/3)[m_u n_u (n_u + 8) + m_v n_v (n_v + 8)] + d[p_u m_v n_v + p_v m_u n_u + m_u m_v min(n_u, n_v)] A/S, (1/3)[m_u n_u (n_u + 2) + m_v n_v (n_v + 2)] + d[p_u m_v (n_v + 2) + p_v m_u (n_u + 2) + m_u m_v min(n_u + 2, n_v + 2)] M, and (m_u n_u + m_v n_v) D.

Table 4.5 shows the final counts. Table 4.6 shows the counts for the case of bicubic surfaces in R^3. The remarks made in the previous section about Tables 4.3 and 4.4 are relevant here as well.
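The choice between the two insertion orderings can be made mechanically; a small helper of ours (hypothetical names, not from the text) evaluates the point-oriented Boehm totals for both orderings:

```python
def boehm_surface_counts(pu, pv, mu, mv, nu, nv):
    """Affine-combination totals for point-oriented Boehm on a tensor
    product surface, for the two insertion orderings."""
    u_first = pv * mu * nu + (pu + mu) * mv * nv   # insert u knots, then v
    v_first = pu * mv * nv + (pv + mv) * mu * nu   # insert v knots, then u
    return u_first, v_first

u_first, v_first = boehm_surface_counts(10, 10, 5, 5, 2, 3)
print(min(u_first, v_first))  # 300: here it is cheaper to insert v first
```

The minimum of the two equals p_u m_v n_v + p_v m_u n_u + m_u m_v min(n_u, n_v), the total quoted above.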
TABLE 4.5
Estimates for surfaces.
(Rows: standard point oriented Boehm, standard column oriented Boehm, row oriented Oslo, and row oriented Goldman Oslo. Numeric entries not recovered from the original.)

TABLE 4.6
Estimates for bicubic surfaces.
(Rows: standard point oriented Boehm, standard column oriented Boehm, row oriented Oslo, and row oriented Goldman Oslo. Numeric entries not recovered from the original.)
4.19. Summary of Part II
The computation counts provided above are only estimates, and are not given for every possible algorithm. Yet some principles do emerge:

1. The speed of a knot insertion algorithm is a complicated function of many parameters. In particular, it depends not only on the algorithm, hardware, and implementation, but also on such factors as the degree[s] of the curve [surface], the dimension of the control vertices, the number of new knots inserted, the relative location of the new knots with respect to each other and to the old knots, the number of new and/or old control vertices, and the tensor product decomposition used (for surfaces). Special cases (not discussed here) such as uniform knot spacing or multiple knots can also affect the counts.

2. The counts above are difficult to compare in detail. One reason is that they are estimates; another is that they are sometimes given in terms of different parameters (specifically, r and m). To a large extent, these problems arise because we neglected to consider the distribution of the new knots among the original knot intervals. While this simplification avoids more complicated counts, it may well exaggerate differences in the counts. For example, certain of the algorithms with higher counts depend on r, while some of the algorithms with lower counts depend on m. Recall that r ≥ m. However, when the new knots are not widely distributed among the original knot intervals (i.e., in most of the original knot intervals in which knots are inserted, many knots are inserted), r and m will differ by a relatively small amount. On the other hand, if the knots are widely distributed, then improved versions of the algorithms can be used; in this latter case, the savings can be substantial [23], [24]. Also, in some of the algorithms, particularly some of the algorithms with higher counts, there are further optimizations possible.
For example, when discussing the Oslo algorithm (or Sablonniere's algorithm) we did not use the fact that if many new control vertices can be found from the same interval, the resulting Oslo triangles share their denominators (a similar observation occurred in the discussion of the Goldman Oslo variants in §4.17). Thus reusing these denominators may result in significantly fewer computations when many new vertices are found from the same interval.

3. There are significant differences in the cost of point algorithms for curves. Some algorithms are linear in n, some quadratic. Boehm's algorithm in this case is simple and efficient.

4. The matrix methods applied to curves in §4.17 had fewer differences, as did all the point and matrix methods applied to surfaces in §4.18. Because of the approximate nature of our counts, it is unclear whether any one of these methods has a distinct advantage over the others when viewed solely from the perspective of the number of calculations required.
5. Whether point or matrix methods are faster is debatable, at least in the context of tensor product surfaces (see also [6], [23]). Note that the number of additions or multiplications involved in the matrix-control vertex multiplication is roughly d times the number of combinations in the more efficient point methods. Thus a large part of the comparison depends on how much of the cost of the matrix method is due to finding the transformation matrix, and how quickly the combinations in a point method can be evaluated (including start-up costs). For curves, as Tables 4.1-4.4 indicate, the faster point methods are generally more efficient than matrix methods. For surfaces, the cost of finding the transformation matrix is a smaller proportion of the total cost of the matrix method because of the tensor product nature of the surfaces. The counts in Tables 4.5 and 4.6 do not seem to us to indicate a clear advantage or disadvantage for any of the methods considered, particularly when the relevant remarks at the end of §4.17 are kept in mind.

All these remarks need to be considered in context. That is, our counts are theoretical estimates, and we have not considered other pertinent issues such as simplicity, flexibility, susceptibility to round-off error, and adaptability for special applications. However, two themes emerge. One is that many of the newer algorithms, such as fki (factored knot insertion) algorithms or Goldman versions, are comparable in efficiency to currently popular knot insertion techniques, and may therefore merit further attention. The other is that, in general, the counts for the algorithms appear more similar than different. The algorithms are indeed different; however, in view of the remarks in this section one is tempted to ask whether the counts for many of the algorithms would be essentially the same if all possible optimizations were made and our simplifying assumptions removed.
4.20. Concluding Remarks
In this chapter we have provided a unified presentation of a variety of techniques for inserting knots into a B-spline curve or surface. This presentation makes the general topic of knot insertion more intelligible; it also draws attention to some lesser known or newer techniques that may be promising in their own right or may lead to other interesting future methods. We have also derived computation counts for some of the techniques presented. While these counts are estimates rather than definitive measures of the algorithms' speeds, they give insight into the algorithms, highlight certain implementation issues, and identify factors which affect speed. The presentation also suggests areas meriting further study, including

- the susceptibility to numerical error of various knot insertion algorithms (in particular rewritten and factored versions);
- the details of some of the newer techniques;
- more precise computation counts and empirical studies of each algorithm's speed;
- whether yet undiscovered knot insertion techniques exist;
- the usefulness of algorithms arising from the discrete B-spline identities (4.4)-(4.7).

Knot insertion is a fundamental tool in the theory and application of B-spline curves and surfaces. Here we have shown it is also an elegant, fertile, and multi-faceted topic. The many different knot insertion algorithms, as well as the many issues surrounding their use, serve to enrich the theory of B-spline curves and surfaces.

Acknowledgments

This work was supported in part by Natural Sciences and Engineering Research Council of Canada grant OGP0036825 and by National Science Foundation grant CCR-9113239.

References

[1] P. J. Barry, De Boor-Fix functional and polar forms, Comput. Aided Geom. Design, 7 (1990), pp. 425-430.
[2] P. J. Barry and R. N. Goldman, Recursive proof of Boehm's knot insertion technique, Comput. Aided Design, 20 (1988), pp. 181-182.
[3] ——, Factored knot insertion, this volume, pp. 65-88.
[4] P. J. Barry and R.-F. Zhu, Another knot insertion algorithm for B-spline curves, Comput. Aided Geom. Design, 9 (1992), pp. 175-183.
[5] W. Boehm, Inserting new knots into B-spline curves, Comput. Aided Design, 12 (1980), pp. 199-201.
[6] ——, On the efficiency of knot insertion algorithms, Comput. Aided Geom. Design, 2 (1985), pp. 141-143.
[7] W. Boehm and H. Prautzsch, The insertion algorithm, Comput. Aided Design, 17
(1985), pp. 58-59.
[8] C. de Boor, On calculating with B-splines, J. Approx. Theory, 6 (1972), pp. 50-62.
[9] C. de Boor and K. Höllig, B-splines without divided differences, in Geometric Modeling: Algorithms and New Trends, G. Farin, ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987.
[10] P. de Casteljau, Shape Mathematics and CAD, Kogan Page Ltd., London, 1986.
[11] E. Cohen, T. Lyche, and R. F. Riesenfeld, Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics, Computer Graphics and Image Processing, 14 (1980), pp. 87-111.
[12] G. Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, Academic Press, San Diego, CA, 1988.
[13] R. N. Goldman, Blossoming and knot insertion algorithms for B-spline curves, Comput. Aided Geom. Design, 7 (1990), pp. 69-81.
[14] R. N. Goldman and P. J. Barry, Algorithms for progressive curves: Extending B-spline and blossoming techniques to the monomial, power, and Newton dual bases, this volume, pp. 11-64.
[15] J. M. Lane and R. F. Riesenfeld, A theoretical development for the computer generation and display of piecewise polynomial surfaces, IEEE Trans. on Pattern Analysis and Machine Intelligence, 2 (1980), pp. 35-46.
[16] J. M. Lane and R. F. Riesenfeld, A geometric proof of the variation diminishing property of B-spline approximation, J. Approx. Theory, 37 (1983), pp. 1-4.
[17] A. Lee and T. T. Png, Speed and robustness of factored knot insertion scheme for B-spline curves, Tech. Report CS-90-07, Computer Science Department, University of Waterloo, Waterloo, Ontario, Canada, 1990.
[18] E. T. Y. Lee, Some remarks concerning B-splines, Comput. Aided Geom. Design, 2 (1985), pp. 307-311.
[19] ——, Yet another proof of knot insertion, Comput. Aided Geom. Design, 6 (1989), pp. 83-84.
[20] ——, A note on blossoming, Comput. Aided Geom. Design, 6 (1989), pp. 359-362.
[21] T. Lyche, A note on the Oslo algorithm, Comput. Aided Design, 20 (1988), pp. 353-355.
[22] ——, Discrete B-splines and conversion problems, in Computation of Curves and Surfaces, W. Dahmen, M. Gasca, and C. A. Micchelli, eds., NATO ASI Series C: Mathematical and Physical Sciences 307, (1990), pp. 117-134.
[23] T. Lyche, E. Cohen, and K. Mørken, Knot line refinement algorithms for tensor product B-spline surfaces, Comput. Aided Geom. Design, 2 (1985), pp. 133-139.
[24] T. Lyche and K. Mørken, Making the Oslo algorithm more efficient, SIAM J. Numer. Anal., 23 (1986), pp. 663-675.
[25] K. Mørken, Contributions to the theory and application of splines, Dr. Scient. thesis, Institutt for informatikk, University of Oslo, Oslo, Norway, 1989.
[26] H. Prautzsch, A short proof of the Oslo algorithm, Comput. Aided Geom. Design, 1 (1984), pp. 95-96.
[27] ——, Letter to the Editor, Comput. Aided Geom. Design, 2 (1985), p. 329.
[28] L. Ramshaw, Blossoming: A Connect-the-Dots Approach to Splines, Tech. Report 19, Digital Equipment Systems Research Center, Palo Alto, CA, 1987.
[29] ——, Blossoms are polar forms, Comput. Aided Geom.
Design, 6 (1989), pp. 323358. [30] P. Sablonniere, Spline and Bezier polygons associated with a polynomial spline curve, Comput. Aided Design, 6 (1978), pp. 257-261. [31] P. V. Sankar, M. J. Silbermann, and L. A. Ferrari, Curve and surface generation and refinement based on a high speed derivative algorithm, submitted for publication. [32] H.-P. Seidel, Knot insertion from a blossoming point of view, Comput. Aided Geom. Design, 5 (1988), pp. 81-86. [33] , A new jnultiaffine approach to splines, Comput. Aided Geom. Design, 6 (1989), pp. 23-32.
This page intentionally left blank
CHAPTER 5

Conversion Between B-Spline Bases Using the Generalized Oslo Algorithm

Tom Lyche, Knut Mørken, and Kyrre Strøm
5.1. Introduction
Knot insertion or subdivision for B-spline curves is important both as a practical and a theoretical tool in most discussions of splines. However, knot insertion often leads to redundant spline representations, and this was the motivation for developing knot removal techniques [13]. An important ingredient in knot removal is to approximate a given spline on a given knot vector by a spline on a reduced knot vector. Both knot insertion and this part of the knot removal process are special cases of the more general problem of transforming a spline between B-spline representations on two arbitrary knot vectors. Of course, the only splines that can be transformed exactly in this way are those that lie in both spaces. It was announced in [11] that the Oslo algorithm, which was originally developed for knot insertion, can be generalized to implement such transformations. In §5.4 we give a simple proof of this result. A proof for the special case of knot removal can be found in [17]. Recently, this has also been proved by the technique of blossoming; see, e.g., [1], [18], and [20]. In knot insertion the transformed B-spline coefficients are convex combinations of the original coefficients. This makes the computations numerically stable. For general transformations the combinations are still affine, but the nonnegativity is lost. Hence, the question of numerical stability becomes a central issue. In §5.5 we consider the special case of removing knots from the Bernstein-Bézier representation of a spline. We give precise upper and lower bounds on the terms computed by the generalized Oslo algorithm. To gain some insight we study in §5.6 the special case where we remove all knots at all interior knot locations. We show that the algorithm is not stable for this kind of conversion. We then show that the algorithm is stable as long as at least one knot remains at every knot location. Finally, in §5.7 we consider more general transformation problems.
We give an upper bound for the quantities computed by the generalized Oslo algorithm. This upper bound can be used to determine
stability in particular cases. We have also included a section (§5.3) which shows how the sum and intersection of two spline spaces can be represented nicely in terms of B-splines.
5.2. Problem Formulation and Background Material

Let τ and t be two nondecreasing sequences and k a positive integer. The spline spaces S_{k,τ} and S_{k,t} are defined as the linear span of the B-splines of order (degree plus one) k on the knot vectors τ and t, i.e.,

S_{k,τ} = span{B_{j,k,τ}}_j,   S_{k,t} = span{B_{i,k,t}}_i.
We define S_{k,τ} = {0}, the space containing only the zero function, in case the knot vector τ contains fewer than k + 1 elements. If f ∈ S_{k,τ} ∩ S_{k,t} is a spline in the intersection between the two spline spaces, we have

f = Σ_j c_j B_{j,k,τ} = Σ_i b_i B_{i,k,t}
for suitable coefficients c = (c_j) and b = (b_i). The coefficient vectors c and b are related by the equation

(5.1)  b = Ac,
where the matrix A has elements a_{ij}. That the relation (5.1) should hold for all splines in the intersection S_{k,τ} ∩ S_{k,t} may or may not determine A uniquely. If S_{k,τ} ⊆ S_{k,t} and no knots in τ occur with multiplicity higher than k, then A is unique, since (5.1) defines its behaviour for all c in S_{k,τ} ∩ S_{k,t}, which in this case is S_{k,τ}. The matrix A is therefore the B-spline knot insertion matrix, whose elements have properties very similar to the B-splines {B_{j,k,τ}}. If, on the other hand, S_{k,τ} is not a subspace of S_{k,t}, there are many matrices that satisfy (5.1), since two such matrices may differ in their behaviour on the splines in S_{k,τ} that are not in S_{k,t}. This does in fact show that A is unique if and only if S_{k,τ} ⊆ S_{k,t}. In this paper we will study methods for computing b when c is given in the case when S_{k,τ} ⊄ S_{k,t}, i.e., when A is not unique.

The natural tool for developing conversion formulas is dual linear functionals, since they give us a formula for a B-spline coefficient of a spline in terms of linear operations on the spline such as evaluation, differentiation, and integration. Linear functionals dual to B-splines can be given in many forms. We recall a few facts about the de Boor-Fix functionals which will be used here. A set of linear functionals {λ_i} is dual to a set of functions {f_j} if λ_i(f_j) = δ_{ij}. Note that if f = Σ_j c_j f_j, then duality implies that the coefficients c_j of f are given by the rule c_j = λ_j(f). For this reason dual linear functionals are also called coordinate functionals. The de Boor-Fix dual linear functionals for
B-splines of order k use the bilinear form (f, g)(x) defined by

(5.2)  (f, g)(x) = Σ_{r=0}^{k−1} (−1)^r D^{k−1−r} f(x) D^r g(x).
Here f and g are two sufficiently smooth functions, x is a point in their common domain of definition, and k is a positive integer.

LEMMA 5.2.1. Suppose that the spline f in S_{k,τ} has the B-spline expansion f = Σ_j c_j B_{j,k,τ}. Then the coefficient c_j is given by the rule

c_j = (f, ψ_{j,k,τ})(u_j),

where u_j is an arbitrary number in (τ_j, τ_{j+k}), and ψ_{j,k} = ψ_{j,k,τ} is the polynomial

(5.3)  ψ_{j,k,τ}(y) = (τ_{j+1} − y)(τ_{j+2} − y) ⋯ (τ_{j+k−1} − y)/(k − 1)!.
Proof. See, for example, [4]. □

This lemma says that the functional λ_j(f) = (f, ψ_{j,k})(u_j) with u_j ∈ (τ_j, τ_{j+k}) satisfies λ_j(B_{i,k}) = δ_{ij}, i.e., is dual to the B-splines B_{j,k}. Note that there are many other representations for dual functionals; see, e.g., [3] and [15].

In this paper the norm ‖x‖ will always denote the max-norm. This is defined to be max_i |x_i| if x is a vector, and max_{t∈I} |x(t)| if x is a function defined on the set I. If x is a matrix, then the max-norm ‖x‖ of x is defined to be max_i Σ_j |x_{ij}|, i.e., the largest possible sum of absolute values of elements in a row.

5.3. Sum and Intersection of Spline Spaces

We are considering linear transformations from one spline space S_{k,τ} to another S_{k,t} such that for a spline f in the intersection S_{k,τ} ∩ S_{k,t} the transformation is simply a change of basis. In the proposition below we give a simple characterization of the intersection S_{k,τ} ∩ S_{k,t}. We also give a related characterization of the sum S_{k,τ} + S_{k,t}, which consists of all possible sums of splines from
the two spaces,

S_{k,τ} + S_{k,t} = {f_τ + f_t : f_τ ∈ S_{k,τ}, f_t ∈ S_{k,t}}.

This is the smallest vector space that contains both S_{k,t} and S_{k,τ}. In this paper we will use notation like τ ∪ {x} for the knot vector obtained from τ by increasing the multiplicity of x by one. Similarly, if x is a knot in τ, we may denote by τ \ {x} the knot vector obtained from τ by reducing the multiplicity of x by one. Along the same lines we can define the knot vectors τ ∧ t and τ ∨ t as follows: Let m_τ(x) denote the total number of times the number x occurs in the knot vector τ. Then we have

(5.4)  m_{τ∧t}(x) = min{m_τ(x), m_t(x)},
(5.5)  m_{τ∨t}(x) = max{m_τ(x), m_t(x)}.
We observe that τ ∧ t ⊆ τ, t ⊆ τ ∨ t.

PROPOSITION 5.3.1. The intersection between the two spline spaces S_{k,τ} and S_{k,t} is given by

(5.6)  S_{k,τ} ∩ S_{k,t} = S_{k,τ∧t}.

If τ ∧ t contains at least k elements, then the sum of the two spline spaces is given by

(5.7)  S_{k,τ} + S_{k,t} = S_{k,τ∨t},

i.e., a spline f is in S_{k,τ∨t} if and only if it is a sum of a spline f_τ from S_{k,τ} and a spline f_t from S_{k,t}. If τ ∧ t contains exactly k elements, then S_{k,τ} ∩ S_{k,t} = {0}, and the decomposition f = f_τ + f_t is unique.

Proof. We start by proving (5.6). Note that we always have

(5.8)  S_{k,τ∧t} ⊆ S_{k,τ} ∩ S_{k,t},
since τ ∧ t is a subsequence of both τ and t. It is therefore sufficient to prove that S_{k,τ} ∩ S_{k,t} ⊆ S_{k,τ∧t}. To this end we introduce some auxiliary knot vectors τ̂ and t̂. Let a and b be real numbers strictly smaller and strictly greater than any knot in both τ and t, and construct τ̂ and t̂ by inserting a and b as k-tuple knots in τ and t. Note that then we have

(5.9)  τ̂ ∧ t̂ = (a, …, a) ∪ (τ ∧ t) ∪ (b, …, b),

with a and b occurring k times each. We first show that S_{k,τ̂} ∩ S_{k,t̂} ⊆ S_{k,τ̂∧t̂}. Suppose that f ∈ S_{k,τ̂} ∩ S_{k,t̂}. Then f = 0 outside the interval [a, b]. Inside [a, b] we see that f is a piecewise polynomial which at a point x has at least max{k − 1 − m_{τ̂}(x), k − 1 − m_{t̂}(x)} continuous derivatives. But by the definition (5.4) of the knot vector τ ∧ t it follows that this is also the exact characterization of a spline in S_{k,τ̂∧t̂}. Then let f be a spline in S_{k,τ} ∩ S_{k,t}. Then we have f ∈ S_{k,τ̂} ∩ S_{k,t̂}, and by (5.9) and what we have just proved we must also have f ∈ S_{k,τ̂∧t̂}. Since f is identically zero on open intervals containing a and b, we must therefore have
f ∈ S_{k,τ∧t}.
To prove (5.7) we note that we trivially have S_{k,τ} + S_{k,t} ⊆ S_{k,τ∨t}.
It is therefore sufficient to show that the two spaces have the same dimension. Denote by #τ the number of elements in τ; then we have dim S_{k,τ} = #τ − k. The dimension formula for the sum of two vector spaces gives

dim(S_{k,τ} + S_{k,t}) = dim S_{k,τ} + dim S_{k,t} − dim S_{k,τ∧t} = #τ + #t − #(τ ∧ t) − k
when #(τ ∧ t) > k. We also have

dim S_{k,τ∨t} = #(τ ∨ t) − k = #τ + #t − #(τ ∧ t) − k.
Thus we see that the dimension of the two spaces is the same, and therefore S_{k,τ} + S_{k,t} = S_{k,τ∨t} if #(τ ∧ t) > k. If #(τ ∧ t) = k, then S_{k,τ} ∩ S_{k,t} = {0} by definition, and the uniqueness of the decomposition (5.7) follows from standard linear algebra. □
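The multiplicity definitions (5.4) and (5.5) and the dimension count used in the proof are easy to experiment with in code. The following Python sketch (hypothetical helper names `meet` and `join`, not part of the paper) builds τ ∧ t and τ ∨ t from multiplicities and checks the identity #(τ ∨ t) = #τ + #t − #(τ ∧ t):

```python
from collections import Counter

def meet(tau, t):
    """tau ∧ t: each value x occurs min(m_tau(x), m_t(x)) times, cf. (5.4)."""
    m_tau, m_t = Counter(tau), Counter(t)
    return [x for x in sorted(set(tau) & set(t))
              for _ in range(min(m_tau[x], m_t[x]))]

def join(tau, t):
    """tau ∨ t: each value x occurs max(m_tau(x), m_t(x)) times, cf. (5.5)."""
    m_tau, m_t = Counter(tau), Counter(t)
    return [x for x in sorted(set(tau) | set(t))
              for _ in range(max(m_tau[x], m_t[x]))]
```

By Proposition 5.3.1, the lengths of these two knot vectors also give the dimensions of the intersection and sum of the corresponding spline spaces.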
5.4. The Generalized Oslo Algorithm
In this section we show that the Oslo algorithm can be used to transform a spline in B-spline form from one spline space to another, even when the knot vectors are completely arbitrary. Let two knot vectors τ and t and the polynomial order k be given. Then, given an integer i, let μ be the integer such that

(5.10)  τ_μ ≤ t_i < τ_{μ+1}.

If τ ⊆ t, then a_{ij} = α_{j,k}(i) in (5.1) is given by the recurrence relation

(5.11)  α_{j,k}(i) = ((t_{i+k−1} − τ_j)/(τ_{j+k−1} − τ_j)) α_{j,k−1}(i) + ((τ_{j+k} − t_{i+k−1})/(τ_{j+k} − τ_{j+1})) α_{j+1,k−1}(i),

where α_{j,1} = δ_{j,μ}, and a term is dropped whenever its denominator is zero.
This is the recurrence relation which forms the basis for the Oslo algorithm [6]. The numbers α_{j,k}(i) are called discrete B-splines. It is easily seen that α_{j,k}(i) = 0 for j ≤ μ − k and for j > μ. Thus the integer μ determines which entries in row i of A can be nonzero. We shall see below that μ given by (5.10) is not the only possible choice. To this end we make the following definition.

DEFINITION 5.4.1. Let τ and t be arbitrary knot vectors, and let i and μ be given integers. Then the number α^μ_{j,k} = α^μ_{j,k,τ,t} is defined by

α^μ_{j,1}(i) = δ_{j,μ},

and for k ≥ 2,

(5.14)  α^μ_{j,k}(i) = ((t_{i+k−1} − τ_j)/(τ_{j+k−1} − τ_j)) α^μ_{j,k−1}(i) + ((τ_{j+k} − t_{i+k−1})/(τ_{j+k} − τ_{j+1})) α^μ_{j+1,k−1}(i).

Obviously, if μ is given by (5.10), then α^μ_{j,1} = α_{j,1}, and the α_{j,k}(i) computed by (5.11) equal the α^μ_{j,k}(i) computed by (5.14). For other values of μ in (5.14) the two may be different.
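Definition 5.4.1 translates directly into a recursion. The sketch below is an illustrative Python implementation (0-based indexing; `alpha` is a hypothetical name), with the usual convention that a term of the recurrence is dropped when its denominator vanishes:

```python
def alpha(j, k, i, mu, tau, t):
    """Discrete B-spline alpha^mu_{j,k}(i) of Definition 5.4.1 (0-based indices)."""
    if k == 1:
        return 1.0 if j == mu else 0.0  # base case: delta_{j,mu}
    total = 0.0
    d1 = tau[j + k - 1] - tau[j]
    if d1 > 0:  # first term of the recurrence (5.14)
        total += (t[i + k - 1] - tau[j]) / d1 * alpha(j, k - 1, i, mu, tau, t)
    d2 = tau[j + k] - tau[j + 1]
    if d2 > 0:  # second term
        total += (tau[j + k] - t[i + k - 1]) / d2 * alpha(j + 1, k - 1, i, mu, tau, t)
    return total
```

For knot insertion (τ ⊆ t) this reproduces the classical Oslo weights; for example, inserting the knot 1/2 into a linear Bernstein segment yields the rows (1, 0), (1/2, 1/2), (0, 1) of the knot insertion matrix.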
Definition 5.4.1 makes sense for any knot vectors τ and t as long as they contain at least k + 1 and k − 1 elements, respectively. Moreover, we have α^μ_{j,k}(i) = 0 for all i if τ_{j+k} = τ_j. In the rest of the paper we will assume that τ (but not necessarily t) contains at least 2k − 2 elements, and that μ is such that

(5.15)  k − 1 ≤ μ ≤ #τ − k + 1.

This causes no loss of generality. For if (5.15) is not fulfilled, we can extend the knot vector τ to τ̂ by adding knots at both ends of τ such that τ̂ satisfies (5.15) and contains the knots of τ as a subsequence. All properties shown for τ̂ then also apply to the corresponding values of j. The nonzero α^μ_{j,k}(i) for these i and μ can always be computed in a triangular scheme of the form

(5.16)  α^μ_{μ,1}(i)
        α^μ_{μ−1,2}(i)  α^μ_{μ,2}(i)
        ⋮
        α^μ_{μ−k+1,k}(i)  ⋯  α^μ_{μ,k}(i)
by repeatedly applying the recurrence relation (5.14). For fixed i and μ, at most k of the α^μ_{j,k}(i) are nonzero. This will be used repeatedly below, so we record it in a lemma.

LEMMA 5.4.1. Let i and μ be fixed integers. Then at most k of the numbers {α^μ_{j,k}(i)}_j are nonzero. More specifically,

α^μ_{j,k}(i) = 0 for j ≤ μ − k and for j > μ.
Proof. Use induction on k. □

The following theorem shows that for certain choices of μ the {α^μ_{j,k}(i)} can be used to transform a spline from S_{k,τ} to S_{k,t}, i.e., they produce a matrix that satisfies (5.1).

THEOREM 5.4.2. Let i be a fixed integer and suppose that the integer μ = μ_i is such that

(5.17)  (τ_μ, τ_{μ+1}) ∩ (t_i, t_{i+k}) ≠ ∅.

Let the matrix A have its ith row given by the vector (α^μ_{j,k}(i))_j, i.e., a_{ij} = α^μ_{j,k}(i).
Then any spline in S_{k,τ} ∩ S_{k,t} with B-spline coefficient vector c in S_{k,τ} and b in S_{k,t} satisfies b = Ac.

Note that we can use a different μ = μ_i for each row of A. We write A = A^μ_{τ,t}, where μ = (μ_i) is the vector of μ_i's used to generate A. A natural question is whether all transformations from S_{k,τ} to S_{k,t} that satisfy (5.1) can be
represented by matrices generated by the generalized Oslo algorithm. In general, the answer is negative. Consider, for example, removal of one knot. In this case one transformation matrix is the pseudo-inverse of the matrix that reinserts the knot, and this pseudo-inverse will in general be a full matrix. In contrast, we see from Lemma 5.4.1 that the matrices produced by the generalized Oslo algorithm have at most k nonzero elements in each row.

Theorem 5.4.2 provides us with a method for converting a spline in B-spline form from the τ-representation to the t-representation by using the matrix A = A^μ_{τ,t}. By interchanging the roles of τ and t, we can also convert from t to τ with a matrix A^ν_{t,τ}. Observe that the condition (5.17) does not in general determine μ uniquely. However, if τ ⊆ t, then, as noted above, the matrix A is unique, and α^μ_{j,k}(i) must be independent of the choice of μ as long as (5.17) is satisfied. Thus the choice (5.10) is not the only choice of μ which can be used in the original Oslo algorithm.

Just as in the case of knot insertion, several algorithms can be deduced from Theorem 5.4.2. For example, to compute b_i given by (5.1), we can start by setting c^1_j = c_j for j = μ − k + 1, μ − k + 2, …, μ and generate the triangular scheme

(5.18)  c^r_j = ((t_{i+k−r+1} − τ_j) c^{r−1}_j + (τ_{j+k−r+1} − t_{i+k−r+1}) c^{r−1}_{j−1})/(τ_{j+k−r+1} − τ_j)

for r = 2, 3, …, k and j = μ − k + r, …, μ. The coefficient b_i is then given by b_i = c^k_μ. This algorithm corresponds to what is usually called Oslo algorithm 2. We can also generate the triangular scheme (5.16) and then compute

(5.19)  b_i = Σ_{j=μ−k+1}^{μ} α^μ_{j,k}(i) c_j.
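As an illustration, here is a minimal Python sketch of Oslo algorithm 2 as just described (0-based indexing; `oslo2` is a hypothetical name). It assumes μ satisfies (5.17), which makes every denominator in the triangular scheme positive:

```python
def oslo2(i, mu, k, tau, t, c):
    """Compute b_i from (5.1) by the triangular scheme (5.18), Oslo algorithm 2."""
    # level 1: copy the k coefficients c_{mu-k+1}, ..., c_{mu}
    d = {j: c[j] for j in range(mu - k + 1, mu + 1)}
    for r in range(2, k + 1):
        d = {j: ((t[i + k - r + 1] - tau[j]) * d[j]
                 + (tau[j + k - r + 1] - t[i + k - r + 1]) * d[j - 1])
                / (tau[j + k - r + 1] - tau[j])
             for j in range(mu - k + r, mu + 1)}
    return d[mu]
```

For example, inserting the knot 1/2 into a quadratic Bernstein segment with coefficients (0, 1, 0) produces the refined coefficients (0, 1/2, 1/2, 0).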
Oslo algorithm 1, i.e., formula (5.19), requires fewer arithmetic operations than Oslo algorithm 2 for tensor product calculations; see [7]. For knot insertion it is known that all the α_{j,k}(i) corresponding to nonzero terms are nonnegative, which means that the combinations in (5.19) are convex, and this is known to be very stable numerically. If τ is not a subsequence of t, then some of the α^μ_{j,k}(i) can be negative, and the stability of the algorithm can no longer be taken for granted; see the following sections.

The proof of Theorem 5.4.2 is based on two lemmas. The first result that we need is a discrete version of Marsden's identity. The original Marsden's identity,

Σ_j ψ_{j,k,τ}(y) B_{j,k,τ}(x) = (x − y)^{k−1}/(k − 1)!,
is fundamental in the theory of splines. The following lemma was proved in [10] under the condition τ ⊆ t.

LEMMA 5.4.3. If (5.15) holds, then the polynomial ψ_{i,k,t} can for any i be expressed in terms of the polynomials ψ_{j,k,τ} with the α^μ_{j,k}(i) as coefficients,

(5.20)  ψ_{i,k,t} = Σ_{j=μ−k+1}^{μ} α^μ_{j,k}(i) ψ_{j,k,τ}.
Proof. The proof is completely analogous to the classical proof of Marsden's identity, so we just give a quick sketch here. The proof is by induction on k. The case k = 1 is trivial, while the induction step follows by applying the recurrence relation (5.14) to (5.20) and noting that α^μ_{μ−k,k−1}(i) = 0 and α^μ_{μ+1,k−1}(i) = 0 by Lemma 5.4.1. □

The next lemma shows that there is a close connection between the de Boor-Fix functionals and the α^μ_{j,k}(i).

LEMMA 5.4.4. The number α^μ_{j,k}(i) is the same as applying the ith de Boor-Fix dual functional of order k on t to the jth B-spline on τ. More precisely, for any u_i ∈ (τ_μ, τ_{μ+1}),

(5.21)  α^μ_{j,k}(i) = (B_{j,k,τ}, ψ_{i,k,t})(u_i).

Moreover, if f = Σ_j c_j B_{j,k,τ}, then

(5.22)  (f, ψ_{i,k,t})(u_i) = Σ_j α^μ_{j,k}(i) c_j.
Here ψ_{i,k,t} is given by (5.3), and (·, ·) is defined in (5.2).

Proof. Applying the operation (B_{j,k,τ}, ·)(y) to both sides of (5.20) and using linearity, we obtain

(B_{j,k,τ}, ψ_{i,k,t})(y) = Σ_{l=μ−k+1}^{μ} α^μ_{l,k}(i) (B_{j,k,τ}, ψ_{l,k,τ})(y)

for any y. Choose y = u_i to be a number in (τ_μ, τ_{μ+1}); then we have u_i ∈ (τ_l, τ_{l+k}) for l = μ − k + 1, …, μ.
Therefore, by biorthogonality, (B_{j,k,τ}, ψ_{l,k,τ})(u_i) = δ_{j,l}, and (5.21) follows. Equation (5.22) is an immediate consequence of (5.21). □

With these building blocks we are now ready to prove Theorem 5.4.2.

Proof of Theorem 5.4.2. Let f be a spline in S_{k,τ} ∩ S_{k,t} with B-spline coefficients c in S_{k,τ} and b in S_{k,t}. Choose u_i to be a number in (τ_μ, τ_{μ+1}) ∩ (t_i, t_{i+k}). Using Lemma 5.2.1, linearity, and Lemma 5.4.4, we obtain

b_i = (f, ψ_{i,k,t})(u_i) = Σ_j α^μ_{j,k}(i) c_j. □
Thus we have proved that the generalized Oslo algorithm can be used to transform a spline in the intersection between two spline spaces from the B-spline form in one of the spaces to the B-spline form in the other. But what does the generalized Oslo algorithm do with a spline that is not in the intersection? Suppose we are converting from S_{k,τ} to S_{k,t}. According to Lemma 5.4.4, the generalized Oslo algorithm produces the coefficients of the same spline as does the de Boor-Fix operator. This operator uses the dual functionals (·, ψ_{i,k,t}) to compute the B-spline coefficients relative to the t knot vector.

In knot insertion, the order of the discrete B-spline α^μ_{j,k}(i) can be reduced when there are some knots from τ among t_{i+1}, …, t_{i+k−1}; see [12]. This can be generalized to the more general setting that we consider here. For this we need a linear independence result for the polynomials ψ_{j,k,τ}.

LEMMA 5.4.5. If τ_μ < τ_{μ+1}, then the polynomials {ψ_{j,k}}_{j=μ−k+1}^{μ} are linearly independent.

Proof. We must show that if p(y) = Σ_{j=μ−k+1}^{μ} c_j ψ_{j,k}(y) = 0, then c_{μ−k+1} = ⋯ = c_μ = 0. If all the knots are simple, this follows by setting y to τ_μ, τ_{μ+1}, …, τ_{μ+k−1} in that order, for then we obtain a nonsingular, homogeneous, triangular linear system for the c's. If one of the knots occurs twice, say τ_{μ+q} = τ_{μ+q+1}, we find c_{μ+q−k} = c_{μ+q+1−k} = 0 by considering the two equations p(τ_{μ+q}) = 0 and p′(τ_{μ+q}) = 0. Higher multiplicities can be treated similarly. □

The proof does in fact show that the conclusion of the lemma holds as long as none of the numbers in {τ_j}_{j=μ−k+2}^{μ+k−1} occurs more than k − 1 times. Lemma 5.4.5 guarantees that an expansion in terms of the polynomials {ψ_{j,k}}_{j=μ−k+1}^{μ} is unique. This is used in the proof of the next result, which reduces the complexity of α^μ_{j,k}(i) in certain cases.

THEOREM 5.4.6. Let i and μ be given integers and suppose that τ_μ < τ_{μ+1}.
Suppose also that for some nonnegative integers l and r the polynomial Π_{j=μ−l+1}^{μ+r} (y − τ_j) is a factor in ψ_{i,k,t}. Then the identity

(5.23)  α^μ_{j,k,τ,t}(i) = α^{μ−l}_{j,k−l−r,τ′,t′}(i)

holds for all j, where τ′ and t′ are obtained from τ and t by removing the knots τ_{μ−l+1}, …, τ_{μ+r}.
Proof. Suppose first that only q(y) = (y − τ_μ) is a factor in ψ_{i,k,t}, in other words that l = 1 and r = 0. Now q(y) is also a factor in ψ_{j,k,τ} for j = μ − k + 1, …, μ − 1, but not in ψ_{μ,k,τ}. By letting y = τ_μ in (5.20) we find that α^μ_{μ,k,τ,t}(i) = 0. This means that (5.20) reduces to

(5.24)  ψ_{i,k−1,t′} = Σ_{j=μ−k+1}^{μ−1} α^μ_{j,k,τ,t}(i) ψ_{j,k−1,τ′},
where τ′ = τ \ {τ_μ} and t′ = t \ {τ_μ}. Now we have τ′_{μ′+1} − τ′_{μ′} = τ_{μ+1} − τ_{μ−1} > 0, so by Lemma 5.4.5 the representation in (5.24) is unique. We must therefore have α^μ_{j,k,τ,t}(i) = α^{μ′}_{j,k−1,τ′,t′}(i) for j = μ − k + 1, …, μ − 1. In addition we found α^μ_{μ,k,τ,t}(i) = 0, so the result follows for all j because of Lemma 5.4.1. If l > 1, the above argument can be repeated the required number of times. Similarly, if r > 0, the knot τ_{μ+1} can be removed first and the argument applied inductively. □

For knot insertion the α's are independent of μ, and we can remove any knot occurring in both t_{i+1}, …, t_{i+k−1} and τ. For general transformations we have to be more careful and only remove consecutive τ's around τ_μ.

The discrete B-splines in knot insertion are sometimes defined as the coefficients when we express one B-spline on the coarse knot vector τ as a linear combination of B-splines on t. In symbols,

B_{j,k,τ} = Σ_i α_{j,k}(i) B_{i,k,t}.

For more general transformations it is not certain that B_{j,k,τ} ∈ S_{k,t}, and such a relation is therefore usually not possible. However, by considering only one polynomial piece of a B-spline, we have the following lemma.

LEMMA 5.4.7. Let B^μ_{j,k,τ} be the polynomial representing B_{j,k,τ} on the nonempty interval (τ_μ, τ_{μ+1}). Then

(5.25)  B^μ_{j,k,τ} = Σ_i α^μ_{j,k}(i) B_{i,k,t}.
Proof. If u_i ∈ (τ_μ, τ_{μ+1}), we have

(B^μ_{j,k,τ}, ψ_{i,k,t})(u_i) = (B_{j,k,τ}, ψ_{i,k,t})(u_i) = α^μ_{j,k}(i)

by Lemma 5.4.4. Since the functionals {(·, ψ_{i,k,t})} are dual to the B-splines {B_{i,k,t}}, we therefore obtain (5.25). □

We end this section with a result which can be useful when determining the α's explicitly.

LEMMA 5.4.8. Suppose i and μ are integers so that (5.15) and (5.17) hold. Then

(5.26)  Σ_{j=μ−k+1}^{μ} α^μ_{j,k}(i) = 1.

Moreover, the first and last possibly nonzero α^μ_{j,k} are given explicitly by

α^μ_{μ−k+1,k}(i) = ψ_{i,k,t}(τ_{μ+1})/ψ_{μ−k+1,k,τ}(τ_{μ+1}),   α^μ_{μ,k}(i) = ψ_{i,k,t}(τ_μ)/ψ_{μ,k,τ}(τ_μ).
Proof. The results are direct consequences of (5.20). The equation (5.26) follows by considering both sides of (5.20) as polynomials in y and comparing coefficients of the highest degree term. For the explicit formulas we evaluate (5.20) at y = τ_{μ+1} and y = τ_μ, respectively. □
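The partition-of-unity relation (5.26) is easy to verify numerically. The sketch below repeats the recurrence of Definition 5.4.1 in Python (0-based indexing; hypothetical names, kept self-contained) and applies it to a quadratic example where all interior knots are removed, so that one of the α's is negative while the row still sums to one:

```python
def alpha(j, k, i, mu, tau, t):
    """Discrete B-spline alpha^mu_{j,k}(i), recurrence (5.14), 0-based indices."""
    if k == 1:
        return 1.0 if j == mu else 0.0
    s = 0.0
    d1 = tau[j + k - 1] - tau[j]
    if d1 > 0:
        s += (t[i + k - 1] - tau[j]) / d1 * alpha(j, k - 1, i, mu, tau, t)
    d2 = tau[j + k] - tau[j + 1]
    if d2 > 0:
        s += (tau[j + k] - t[i + k - 1]) / d2 * alpha(j + 1, k - 1, i, mu, tau, t)
    return s

# Quadratic Bezier knots with a breakpoint at 1/2; all interior knots removed in t.
tau = [0.0] * 3 + [0.5] * 3 + [1.0] * 3
t = [0.0] * 3 + [1.0] * 3
row = [alpha(j, 3, 1, 2, tau, t) for j in range(6)]  # row i = 1 with mu = 2
# row == [-1.0, 2.0, 0.0, 0.0, 0.0, 0.0]: a negative entry, yet sum(row) == 1
```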
5.5. Bézier to B-Spline Conversion
An important conversion problem is to find the B-spline representation of a piecewise polynomial. Suppose f is a piecewise polynomial with breakpoints (ξ_i) with ξ_0 < ξ_1 < ⋯ < ξ_{ℓ+1} for some positive integer ℓ, and that we know the Bernstein-Bézier form of f on each interval I_i = [ξ_i, ξ_{i+1}] for i = 0, 1, …, ℓ:

f(x) = Σ_{j=1}^{k} d_{i,j} B^k_j(x; ξ_i, ξ_{i+1})

for x ∈ I_i, where B^k_j(·; a, b) denotes the jth Bernstein-Bézier basis polynomial of order k on [a, b]. In addition, suppose we know the continuity between each polynomial piece,
so that f ∈ C^{ν_i} at ξ_i for some nonnegative integer ν_i. By the Curry-Schoenberg theorem [19, Thm. 4.9], we know that f ∈ S_{k,t}, where t = (t_i) is any knot vector containing the sequence

(5.27)  (ξ_1, …, ξ_1, ξ_2, …, ξ_2, …, ξ_ℓ, …, ξ_ℓ),  with ξ_i repeated k − 1 − ν_i times,

as a subsequence. Here we allow ν_i = k − 1, i.e., the possibility of removing all knots at ξ_i. Now, the piecewise Bernstein-Bézier form of f is also a B-spline representation on the knot sequence

(5.28)  τ = (ξ_0, …, ξ_0, ξ_1, …, ξ_1, …, ξ_{ℓ+1}, …, ξ_{ℓ+1}),  with each ξ_i repeated k times.
Indeed, we have f = Σ_j c_j B_{j,k,τ} with c_{ik+j} = d_{i,j}. We seek b = (b_i)_{i=1}^{m} so that f = Σ_{i=1}^{m} b_i B_{i,k,t}. Note here that it would be sufficient for the interior knots of τ to have (k − 1)-fold multiplicity, but that would make the indexing of the c_i slightly more complicated.

The natural approach to this problem is to use the generalized Oslo algorithm. For each integer i, we can choose a suitable μ = μ_i and set b_i = c^k_μ, where c^k_μ is generated by the triangular scheme (5.18). This is a very simple O(mk²) algorithm for finding the B-spline form from the Bernstein-Bézier form. As was mentioned above, the numerical stability of the generalized Oslo algorithm is not obvious. For the algorithm to be stable it is necessary that the α^μ_{j,k}(i) do not become too large for any i or j, since otherwise serious cancellation may occur. Equivalently, the norm of the matrix A with elements a_{ij} = α^μ_{j,k}(i) must not be too large; see the introduction in [14]. (Recall that in this paper we use the max-norm.) Not surprisingly, the choice of the parameter μ is important, for it may influence the numerical stability of the generalized Oslo algorithm.
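The whole O(mk²) procedure can be sketched as follows (illustrative Python; `choose_mu` and `convert` are hypothetical names, 0-based indexing). For each i an admissible μ_i satisfying (5.17) is located by scanning, and the triangular scheme (5.18) is run. The example converts the two quadratic Bézier pieces of f(x) = x² on [0, 1/2] and [1/2, 1] to a single Bernstein representation on [0, 1]; since f lies in both spaces, the conversion is exact:

```python
def choose_mu(i, k, tau, t):
    """First mu with (tau_mu, tau_{mu+1}) ∩ (t_i, t_{i+k}) nonempty, cf. (5.17)."""
    for mu in range(k - 1, len(tau) - k + 1):
        if tau[mu] < tau[mu + 1] and tau[mu] < t[i + k] and t[i] < tau[mu + 1]:
            return mu
    raise ValueError("no admissible mu for row %d" % i)

def convert(k, tau, t, c):
    """B-spline coefficients on t of the spline with coefficients c on tau.

    Exact for splines lying in both spaces (generalized Oslo algorithm)."""
    b = []
    for i in range(len(t) - k):
        mu = choose_mu(i, k, tau, t)
        d = {j: c[j] for j in range(mu - k + 1, mu + 1)}
        for r in range(2, k + 1):  # triangular scheme (5.18)
            d = {j: ((t[i + k - r + 1] - tau[j]) * d[j]
                     + (tau[j + k - r + 1] - t[i + k - r + 1]) * d[j - 1])
                    / (tau[j + k - r + 1] - tau[j])
                 for j in range(mu - k + r, mu + 1)}
        b.append(d[mu])
    return b
```

When f does not lie in both spaces, `convert` produces the coefficients of the de Boor-Fix projection onto S_{k,t}, as discussed after Theorem 5.4.2.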
The following theorem gives upper and lower bounds for the α's when we convert from the piecewise Bézier form to an arbitrary B-spline form. Note that by the notation ‖f‖_{[a,b]} we mean the max-norm of f on the interval [a, b].

THEOREM 5.5.1. Suppose t is arbitrary and τ is given by (5.28). Let i and μ = μ_i be such that (5.15) holds. Then

(5.29)  β_i/d_k ≤ max_j |α^μ_{j,k}(i)| ≤ d_k β_i,

where

(5.30)  β_i = (k − 1)! ‖ψ_{i,k,t}‖_{[τ_μ, τ_{μ+1}]}/(τ_{μ+1} − τ_μ)^{k−1}.
The constants d_k are independent of τ and t. In particular, d_2 = 1, d_3 < 3, d_4 < 5, and in general d_k < k 2^{k−3/2}/(k − 1).

Proof. For this τ we have τ_{μ−k+1} = ⋯ = τ_μ and τ_{μ+1} = ⋯ = τ_{μ+k}. Set a = τ_μ, b = τ_{μ+1}, and N = k − 1. Then

ψ_{μ−s,k,τ}(y) = (−1)^s ((b − a)^N/(k − 1)!) C(N, s)^{−1} B^N_s(y),  s = 0, 1, …, N,

where (B^N_s) is the Bernstein-Bézier basis relative to [a, b],

B^N_s(y) = C(N, s) ((y − a)/(b − a))^s ((b − y)/(b − a))^{N−s},

and C(N, s) is the binomial coefficient. The discrete Marsden identity (5.20) can now be written in the form

(5.31)  ψ_{i,k,t}(y) = Σ_{s=0}^{N} γ_s B^N_s(y),

where

γ_s = (−1)^s ((b − a)^N/(k − 1)!) C(N, s)^{−1} α^μ_{μ−s,k}(i).
Since (5.31) is a B-spline expansion, we can bound the norm of the expanded function from below and above by the norm of its coefficients. In general [2], if f ∈ S_{k,τ} has B-spline coefficients c, then

κ^{−1} ‖c‖ ≤ ‖f‖ ≤ ‖c‖,

where the constant κ can be bounded independently of the knot vector τ. In (5.31) we have the special case of a Bernstein-Bézier expansion, so that a bound for κ is as given in the theorem; see [9]. □
This result can be expressed in terms of matrix norms.

COROLLARY 5.5.2. The norm of the transformation matrix A^μ_{τ,t} is bounded by

(5.32)  ‖A^μ_{τ,t}‖ ≤ 2^{k−1} d_k ‖β‖.

Here β is the vector (β_i), and β_i is given by (5.30).

Proof. Using (5.29), the sum of the absolute values of the elements in the ith row of A^μ_{τ,t} can be bounded as follows:

Σ_j |α^μ_{j,k}(i)| = Σ_{s=0}^{N} (k − 1)! C(N, s) |γ_s|/(b − a)^N ≤ d_k β_i Σ_{s=0}^{N} C(N, s).
The sum evaluates to 2^{k−1}, which completes the proof. □
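The norm in Corollary 5.5.2 can also be measured directly: since column j of A^μ_{τ,t} is the image of the jth unit coefficient vector, the matrix can be assembled by running the triangular scheme (5.18) on unit vectors. The Python sketch below (hypothetical names; 0-based indexing; a first-fit choice of μ_i) does this for quadratic pieces on a uniform grid with all interior knots removed, the situation of Proposition 5.6.1 below, where the max-norm is seen to grow as the grid is refined:

```python
def _choose_mu(i, k, tau, t):
    """First mu with (tau_mu, tau_{mu+1}) ∩ (t_i, t_{i+k}) nonempty, cf. (5.17)."""
    for mu in range(k - 1, len(tau) - k + 1):
        if tau[mu] < tau[mu + 1] and tau[mu] < t[i + k] and t[i] < tau[mu + 1]:
            return mu
    raise ValueError("no admissible mu")

def _coeff(i, k, tau, t, c):
    """b_i via the triangular scheme (5.18)."""
    mu = _choose_mu(i, k, tau, t)
    d = {j: c[j] for j in range(mu - k + 1, mu + 1)}
    for r in range(2, k + 1):
        d = {j: ((t[i + k - r + 1] - tau[j]) * d[j]
                 + (tau[j + k - r + 1] - t[i + k - r + 1]) * d[j - 1])
                / (tau[j + k - r + 1] - tau[j])
             for j in range(mu - k + r, mu + 1)}
    return d[mu]

def max_norm_of_A(k, tau, t):
    """Max row sum of |a_ij|; columns of A are images of unit coefficient vectors."""
    n, m = len(tau) - k, len(t) - k
    rows = [[0.0] * n for _ in range(m)]
    for j in range(n):
        e = [0.0] * n
        e[j] = 1.0
        for i in range(m):
            rows[i][j] = _coeff(i, k, tau, t, e)
    return max(sum(abs(a) for a in row) for row in rows)

def bezier_knots(k, ell):
    """The knot vector (5.28): k-tuple knots at the uniform breakpoints i/(ell+1)."""
    return [i / (ell + 1.0) for i in range(ell + 2) for _ in range(k)]
```

For k = 3 and the Bernstein target knots on [0, 1], this first-fit choice gives ‖A‖ = 9 for one interior breakpoint and ‖A‖ = 81 for four, illustrating the instability analyzed next.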
5.6. Some Examples
We continue to look at the conversion from the Bernstein-Bézier form to a B-spline form, and give an example to show that α^μ_{j,k}(i) can be arbitrarily large if we remove many knots.

PROPOSITION 5.6.1. Suppose that τ is given by (5.28) with ξ_i = i/(ℓ + 1) for i = 0, 1, …, ℓ + 1, and t are the Bernstein knots

t = (0, …, 0, 1, …, 1),  with 0 and 1 each occurring k times.
Then for i = 1, 2, …, k we have

(5.33)  max_j |α^μ_{j,k}(i)| ≥ (ℓ + 1)^{k−1} (k − 1)! min{|ψ_{i,k,t}(h)|, |ψ_{i,k,t}(1 − h)|}/d_k,

where upper bounds for the constants d_k are given in Theorem 5.5.1. Here we have a grid on [0, 1] with spacing h = 1/(ℓ + 1), and τ has a k-tuple knot at every grid point. To obtain t we remove all the kℓ interior knots at ξ_1, …, ξ_ℓ. For 1 < i < k the lower bound grows without bound as ℓ increases.

Proof. Consider the bound (5.29). To bound the constant β_i we observe that τ_{μ+1} − τ_μ = h = 1/(ℓ + 1) for all (τ_μ, τ_{μ+1}) ⊂ (t_i, t_{i+k}) = (0, 1) with τ_μ < τ_{μ+1}. Since (τ_μ, τ_{μ+1}) = (jh, (j + 1)h) for some integer j, the μ which minimizes β_i is obtained directly from the integer j for which the minimum in

(5.34)  min_{0≤j≤ℓ} ‖ψ_{i,k,t}‖_{[jh, (j+1)h]}
is attained. Because of the special shape of the functions

ψ_{i,k,t}(y) = (−y)^{k−i} (1 − y)^{i−1}/(k − 1)!,  i = 1, …, k,

the minimum in (5.34) must be attained either at the right end of the first interval (j = 0) or at the left end of the last interval (j = ℓ). The minimum is
therefore equal to the smallest of the two numbers |ψ_{i,k,t}(h)| and |ψ_{i,k,t}(1 − h)|. Now we have

|ψ_{i,k,t}(h)| = h^{k−i}(1 − h)^{i−1}/(k − 1)!,  |ψ_{i,k,t}(1 − h)| = (1 − h)^{k−i} h^{i−1}/(k − 1)!.

Since ℓ + 1 = 1/(τ_{μ+1} − τ_μ), we find that β_i = (ℓ + 1)^{k−1}(k − 1)! min{|ψ_{i,k,t}(h)|, |ψ_{i,k,t}(1 − h)|}, so (5.29) simplifies to (5.33). □

It is instructive to look at the quadratic case k = 3 in more detail. For this simple example we can find the α's explicitly. Using Lemma 5.4.8 and the special values

(5.35)  ψ_{1,3,t}(y) = y²/2,  ψ_{2,3,t}(y) = −y(1 − y)/2,  ψ_{3,3,t}(y) = (1 − y)²/2,

we can determine the α's for i = 1, 2, 3 and each admissible μ.
For each integer i we want to select μ to minimize the size of the α's. For i = 1 we choose τ_μ = 0 and τ_{μ+1} = h and obtain

(5.36)  (α^μ_{μ−2,3}(1), α^μ_{μ−1,3}(1), α^μ_{μ,3}(1)) = (1, 0, 0).

For i = 3 the best choice is τ_μ = 1 − h and τ_{μ+1} = 1, giving

(α^μ_{μ−2,3}(3), α^μ_{μ−1,3}(3), α^μ_{μ,3}(3)) = (0, 0, 1).

Finally, for i = 2, we can choose μ at either end of the range. Letting τ_μ = 0 leads to

(5.37)  (α^μ_{μ−2,3}(2), α^μ_{μ−1,3}(2), α^μ_{μ,3}(2)) = (−ℓ, ℓ + 1, 0),

while τ_{μ+1} = 1 gives

(5.38)  (α^μ_{μ−2,3}(2), α^μ_{μ−1,3}(2), α^μ_{μ,3}(2)) = (0, ℓ + 1, −ℓ).
These exact values compare nicely with the bounds (5.33). The α's in (5.37) and (5.38) are large if ℓ is large. As was mentioned in the previous section, this may lead to serious cancellation. Let us therefore consider some alternative methods for converting between the knot vectors τ and t given in Proposition 5.6.1. Let f be a quadratic polynomial with B-spline coefficients c on the uniform Bernstein-Bézier knot vector τ and B-spline coefficients (b_i)_{i=1}^{3} on the Bernstein-Bézier knot vector t without interior knots. A natural way to find the b's would be to compute them by interpolation from the values of
f at each end of the range, and the derivative at one end. For the end values we find

(5.39)  b_1 = f(0) = c_1,  b_3 = f(1) = c_n,

where c_1 and c_n are the first and last B-spline coefficients of f on τ. For the derivative at the left end we obtain

f′(0) = 2(c_2 − c_1)/h = 2(b_2 − b_1)

by differentiating with respect to the two knot vectors. This leads to

b_2 = c_1 + (c_2 − c_1)/h = −ℓ c_1 + (ℓ + 1) c_2.
We see that this is exactly the same as computing b_2 from (5.37), and therefore just as unstable. The reader might wonder if there is a more stable way to find b_1, b_2, and b_3 than the methods considered so far. We will discuss this in more generality in a forthcoming paper. Here we just give some suggestions. The basic strategy is to use a different form of the dual linear functionals. For the quadratic case the formula [2]

b_i = 2f((t_{i+1} + t_{i+2})/2) − (f(t_{i+1}) + f(t_{i+2}))/2

is attractive. From this we find in our case b_1 and b_3 as in (5.39) and

(5.40)  b_2 = 2f(1/2) − (f(0) + f(1))/2.
See [15] for other such formulas. Another possibility is to choose three points x_1, x_2, x_3 in [0, 1] and compute b_1, b_2, b_3 by interpolating at these points. We then need to solve a 3 by 3 linear system of equations. This is essentially what we did above by choosing the x's to be the points 0, 0, and 1, corresponding to value and derivative interpolation at 0 and only value interpolation at 1. A better choice is to take the x's to be 0, 1/2, and 1, the extrema of the Chebyshev polynomial 8x² − 8x + 1, shifted to [0, 1]. In our simple case this system can be solved exactly, and not surprisingly the solution is given by (5.39) and (5.40). It can be shown that the condition number of the linear system is bounded by 3, and this algorithm for finding the b's will be stable regardless of the size of ℓ. For the general conversion problem a stable method is known, namely to compute the b's by interpolating at the extrema of the so-called Chebyshev spline of order k on t. It was shown in [16] that this choice of interpolation points leads to a linear system with a minimal condition number. Moreover, this condition number can be bounded independently of the knot vectors. We refer to [16] for more details. It should be mentioned that currently this method is not very practical, since these extreme points are difficult and expensive to compute in general. Programs for computing the extreme points can be found in [5] and [8].
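For the quadratic case, the stable interpolation strategy needs only three evaluations of f. The Python sketch below (hypothetical names; a standard de Boor evaluation is included so the block is self-contained) computes b_1 and b_3 from (5.39) and b_2 from (5.40). Applied to f(x) = x² given in piecewise Bézier form with breakpoint 1/2, it recovers the Bernstein coefficients (0, 0, 1) with no large intermediate quantities:

```python
def deboor(k, tau, c, x):
    """Evaluate the spline with coefficients c on tau at x (de Boor's algorithm)."""
    # rightmost mu with tau[mu] <= x and tau[mu] < tau[mu+1]
    mu = max(j for j in range(k - 1, len(c)) if tau[j] <= x and tau[j] < tau[j + 1])
    d = {j: c[j] for j in range(mu - k + 1, mu + 1)}
    for r in range(1, k):
        d = {j: ((tau[j + k - r] - x) * d[j - 1] + (x - tau[j]) * d[j])
                / (tau[j + k - r] - tau[j])
             for j in range(mu - k + 1 + r, mu + 1)}
    return d[mu]

def quadratic_bernstein_coeffs(tau, c):
    """Stable conversion to Bernstein form on [0, 1]: interpolate at 0, 1/2, 1."""
    f0, fh, f1 = (deboor(3, tau, c, x) for x in (0.0, 0.5, 1.0))
    return [f0, 2.0 * fh - 0.5 * (f0 + f1), f1]  # (5.39) and (5.40)
```

The interpolation points 0, 1/2, 1 are the shifted Chebyshev extrema mentioned above, which keep the 3 by 3 system well conditioned for any ℓ.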
Yet another possibility is to use a least squares approach. Here we would solve the linear least squares problem min_b ‖W(Ab − c)‖_2, where A = A_{t,τ} is the knot insertion matrix from t to τ, and W is a diagonal scaling matrix with diagonal elements (τ_{j+k} − τ_j)^{1/2}. This is also a good method for approximate conversion, i.e., to find a spline g ∈ S_{k,t} which approximates an f which lies in S_{k,τ} but not in S_{k,t}. Details of this method can be found in [13].

So far we have only used Theorem 5.5.1 to give negative results and describe situations where the generalized Oslo algorithm gives numerical problems. We end this section by proving that if no knot is removed completely from the Bernstein-Bézier knot vector, then the generalized Oslo algorithm is stable.

THEOREM 5.6.2. Suppose that τ is given by (5.28) and that t is a subsequence of τ containing the knots (5.27), where we assume that ν_i < k − 1, i.e., there is at least one knot remaining at every set of k knots in τ. Let for each i the integer μ_i be such that (5.17) holds, with (τ_{μ_i}, τ_{μ_i+1}) chosen among the intervals (t_{i+s}, t_{i+s+1}), 1 ≤ s ≤ k − 2, whenever t_{i+1} < t_{i+k−1},
and set μ = (μ_i). Then the norm of the transformation matrix A^μ_{τ,t} is bounded by

(5.41)  ‖A^μ_{τ,t}‖ ≤ 2^{k−2} d_{k−2},

where d_{k−2} is given in Theorem 5.5.1.

Proof. Fix the integer i and set μ = μ_i. Suppose first that t_{i+1} = ⋯ = t_{i+k−1}. Then all of those knots are shared between τ and t, so we can apply Theorem 5.4.6. Define τ′ and t′ by

τ′ = τ \ {t_{i+1}, …, t_{i+k−1}},  t′ = t \ {t_{i+1}, …, t_{i+k−1}}.
This gives μ′ = μ if t_{i+1} − t_i > t_{i+k} − t_{i+k−1}, and μ′ = μ − k + 1 otherwise. In either case we have |α^μ_{j,k}(i)| ≤ 1 for all j. Suppose then that t_{i+1} < t_{i+k−1}. The choice of μ in the theorem ensures that

(τ_μ, τ_{μ+1}) = (t_{i+s}, t_{i+s+1})

for a suitable value of s. This means that both τ_μ and τ_{μ+1} must be roots in ψ_{i,k,t}, so that by Theorem 5.4.6 the order can be reduced by two. We set

τ′ = τ \ {τ_μ, τ_{μ+1}},  t′ = t \ {τ_μ, τ_{μ+1}}.

We can then consider τ′ to have (k − 2)-tuple knots at τ′_{μ′} and τ′_{μ′+1}, and we have τ′_{μ′+1} − τ′_{μ′} = τ_{μ+1} − τ_μ = Δ_{i,k}. Theorem 5.4.6 now gives

(5.42)  α^μ_{j,k,τ,t}(i) = α^{μ′}_{j,k−2,τ′,t′}(i),
while Theorem 5.5.1 bounds the numbers |α^{μ′}_{μ′−r,k−2,τ′,t′}(i)| for r = 0, 1, …, k − 3. To complete the proof we need to estimate the norm of ψ_{i,k−2,t′} on the interval [t_{i+s}, t_{i+s+1}] of length Δ_{i,k}. Note here that s satisfies 1 ≤ s ≤ k − 2. For all y ∈ [t_{i+s}, t_{i+s+1}) ⊆ [t′_{i+s−1}, t′_{i+s}) we can bound |ψ_{i,k−2,t′}(y)|. Combining this with (5.42) gives a bound on the nonzero α^μ_{j,k}(i), and all the other α^μ_{j,k}(i) are zero. We therefore find the bound (5.41), which is the required result. □

Note that for the cubic case k = 4 we obtain the bound 4 for the norm of the transformation matrix computed by the generalized Oslo algorithm.
5.7. An Upper Bound for the General Case
So far we have only obtained upper and lower bounds on the α^μ_{j,k}(i) in the case when τ is a Bernstein-Bézier knot vector. In this section we give an upper bound in the general case where τ and t are arbitrary knot vectors. The key to this result is the following lemma, which shows that the matrix A^μ_{τ,t} can be factored in a useful way.

LEMMA 5.7.1. Suppose ρ, τ, and t are knot vectors with τ ⊆ ρ. For each B-spline B_{i,k,t} on t, suppose that μ_i and ν_i are integers such that

(5.43)  (ρ_{ν_i}, ρ_{ν_i+1}) ⊆ (τ_{μ_i}, τ_{μ_i+1}) ∩ (t_i, t_{i+k}).

Set ν = (ν_i) and μ = (μ_i). Then

(5.44)  A^μ_{τ,t} = A^ν_{ρ,t} A_{τ,ρ},

where A_{τ,ρ} is the knot insertion matrix from τ to ρ.
Proof. Since τ ⊆ ρ we have

B_{j,k,τ} = Σ_l (A_{τ,ρ})_{lj} B_{l,k,ρ}.

Therefore, for any u_i ∈ (ρ_{ν_i}, ρ_{ν_i+1}) we have

α^{μ_i}_{j,k,τ,t}(i) = (B_{j,k,τ}, ψ_{i,k,t})(u_i) = Σ_l (A_{τ,ρ})_{lj} (B_{l,k,ρ}, ψ_{i,k,t})(u_i) = Σ_l (A^ν_{ρ,t})_{il} (A_{τ,ρ})_{lj}
by (5.21), which is the required result. □

Note that if the support of the B-spline B_{i,k,t} is disjoint from the interior of τ, then it is not possible to find integers μ_i and ν_i that satisfy (5.43). Lemma 5.7.1 can be extended to cover such cases by augmenting the matrices in (5.44) by zeros in the appropriate places.

By letting ρ be the knot vector obtained by making all the knots in τ k-tuple, we obtain a nice result.

THEOREM 5.7.2. Let τ and t be two arbitrary knot vectors. Then the norm of the matrix A^μ_{τ,t} is bounded by

(5.45)  ‖A^μ_{τ,t}‖ ≤ 2^{k−1} d_k ‖β‖,
where $\beta = (\beta_i)$ and $\beta_i$ is given by (5.30).

Proof. Let $\rho$ denote the knot vector obtained by increasing the multiplicity of all the knots in $\tau$ to $k$. Then the matrix $A_{\tau,\rho}$ is a knot insertion matrix and we have

From this and Lemma 5.7.1 we obtain
To bound $\|A_{\rho,t}\|$ we use Corollary 5.5.2 with $\rho$ playing the role of $\tau$. But since each $(\rho_{\nu_i}, \rho_{\nu_i+1})$ is equal to $(\tau_\mu, \tau_{\mu+1})$ for a suitable integer $\mu$, it does not matter whether we use $\rho$ or $\tau$ to define the $\beta$'s. □

With the upper bound provided by Theorem 5.7.2 it is possible to determine that the generalized Oslo algorithm is stable in some cases, typically when the knot spacing in the two knot vectors $\tau$ and $t$ is not too different. To determine whether the algorithm is unstable for given knot vectors a lower bound on $\|A_{\tau,t}\|$ is required, and this seems like a more difficult task.
CHAPTER 6
How Much Can the Size of the B-Spline Coefficients Be Reduced by Inserting One Knot?
Tom Lyche and Knut Mørken
6.1. Introduction
Knot insertion has become an important tool in working with splines. There are several reasons for this. By inserting knots we obtain more flexibility for design purposes. Another purpose of inserting more knots is to make sure that the B-spline coefficients model the function accurately. The success of this approach is due to the fact that as more knots are inserted in a fixed spline and the knot spacing goes to zero, the B-spline coefficients converge to the spline; see [4] and [5]. Let us consider this in more detail. When extra knots are inserted in the knot vector $\tau$ of a spline, the B-spline coefficients $b$ of the spline on the refined knot vector $t$ can be written as a convex combination of the B-spline coefficients $c$ on $\tau$. Specifically, if $k$ is the order (degree plus one) of the piecewise polynomials that make up the spline, then
$$b_i = \sum_j a_{j,k}(i)\, c_j, \qquad a_{j,k}(i) \ge 0, \qquad \sum_j a_{j,k}(i) = 1. \tag{6.1}$$
This implies that the max-norm of the coefficients will be reduced,
$$\|b\| \le \|c\|, \tag{6.2}$$
where $A = A_{k,\tau,t} = (a_{j,k}(i))$ is the transformation matrix such that $b = Ac$. The fact that $b_i$ is a convex combination of the $c_j$ is often taken as a proof that knot insertion is a numerically stable process. However, it should be observed that the $c_j$ can have alternating signs, and then we have no guarantee that the computed coefficients $\bar b$ will have small relative errors. Specifically, suppose $\bar b$ is the result of computing the matrix--vector product $b = Ac$ in floating point arithmetic. By a standard backward error analysis [13] it can be shown that
$$\bar b = (A + E)c$$
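The danger described here is easy to exhibit numerically. The sketch below is an illustration, not a display from the chapter: the matrix $A$ is the standard single-knot insertion matrix for putting $s = 1/2$ into the quadratic Bernstein knot vector $(0,0,0,1,1,1)$, and the perturbation $E$ is a hand-made stand-in for the floating point backward error.

```python
# A hedged illustration: why ||c|| / ||A c|| governs the relative error in
# b = A c when A is perturbed to A + E.  Every row of A below is a convex
# combination, so ||A c|| <= ||c|| in the max-norm.

A = [[1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def max_norm(v):
    return max(abs(x) for x in v)

c = [1.0, -3.0, 1.0]     # alternating signs: heavy cancellation in A c
b = matvec(A, c)         # b = (1, -1, -1, 1): ||b|| = 1 although ||c|| = 3

# Adversarial perturbation E of size eps (a stand-in for round-off).
eps = 1e-8
E = [[eps if x > 0 else -eps for x in c] for _ in A]
b_pert = matvec([[a + e for a, e in zip(ra, re)] for ra, re in zip(A, E)], c)

rel_err = max_norm([p - q for p, q in zip(b_pert, b)]) / max_norm(b)
amplification = max_norm(c) / max_norm(b)   # the ratio ||c|| / ||A c|| = 3
print(amplification, rel_err)               # rel_err is about 5e-8, not 1e-8
```

Although each $b_i$ is a convex combination, a perturbation of size $\varepsilon$ in $A$ here produces a relative error in $b$ of size $5\varepsilon$, amplified in proportion to $\|c\|/\|Ac\|$.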
Knot Insertion Conditioning
for some matrix $E$ of small size. Moreover, an upper bound for $E$ can be given in terms of the order $k$ and the round-off unit of the computer. (The size of this upper bound is not important for this discussion.) Choose any vector/matrix norm $\|\cdot\|$. Since we can bound the norm of $\bar b - b$ in terms of the norm of $E$ and the norm of $c$,
$$\|\bar b - b\| \le \|E\|\,\|c\|.$$
Dividing both sides by $\|b\| = \|Ac\|$ we obtain the following bound for the relative error,
$$\frac{\|\bar b - b\|}{\|b\|} \le \|E\|\,\frac{\|c\|}{\|Ac\|}.$$
From this we see that unless $\|c\|/\|Ac\|$ is small we cannot guarantee that the relative error in $\bar b$ is small. This is a motivation for studying the number
$$\delta_{k,\tau,t} = \sup_{c\ne 0}\frac{\|Ac\|}{\|c\|}\ \sup_{c\ne 0}\frac{\|c\|}{\|Ac\|}. \tag{6.3}$$
Note that we always have $\delta_{k,\tau,t} \ge 1$, and if $A$ had been a square nonsingular matrix this supremum would be the condition number of $A$ with respect to inversion. Thus, we see that $\delta_{k,\tau,t}$ is a generalization of the usual condition number for square matrices (with respect to inversion). From (6.1) it is clear that $\|Ac\| \le \|c\|$ and the first supremum is one. Therefore $\delta_{k,\tau,t}$ reduces to
$$\delta_{k,\tau,t} = \sup_{c\ne 0}\frac{\|c\|}{\|Ac\|}, \tag{6.4}$$
and we obtain the following bound for the relative error in $\bar b$,
$$\frac{\|\bar b - b\|}{\|b\|} \le \delta_{k,\tau,t}\,\|E\|.$$
Thus if $\delta_{k,\tau,t}$ is small then the relative error in $\bar b$ will be of the same order of magnitude as the perturbation $E$ in $A$, which in turn would mean that knot insertion is a well-conditioned problem. It should also be noted that $\delta_{k,\tau,t}$ measures the conditioning of the inverse problem, that of removing knots. In [10] we gave an upper bound on $\delta_{k,\tau,t}$ in the case where $t$ is obtained by inserting one new knot in $\tau$. This was to bound the condition number of a certain linear system of equations. Also, in [7] the authors consider the problem of removing one knot for the purpose of assessing the quality of a curve. Again the question of conditioning is of interest. The content of this paper is as follows. In §2 we recall [3] the definition of the condition number $\kappa_{k,\tau}$ for the B-spline basis. We show that $\kappa_{k,\tau}$ provides
an upper bound for the number $\delta_{k,\tau,t}$ as defined in (6.3). Since (cf. [2] and the next section) $\kappa_{k,\tau}$ can be bounded independently of the knots, this means that $\delta_{k,\tau,t}$ can always be bounded by a constant which only depends on $k$. The bound for $\kappa_{k,\tau}$ grows exponentially, however, and it would be interesting to see if better bounds for $\delta_{k,\tau,t}$ are possible. This is indeed the case when $t$ is obtained from $\tau$ by inserting one knot, and in the rest of the paper we consider this case in detail. In §3 we identify for an arbitrary $\tau$ the coefficient vector $c$ for which the supremum in (6.4) is attained. In §4 we consider the size of $\delta_{k,\tau,t}$ in the case of a uniform knot vector $\tau$. We show that $\delta_{k,\tau,t} = O(\sqrt{k})$. In §5 we study the Bernstein--Bézier case. We show that this is the worst knot vector in terms of maximizing $\delta_{k,\tau,t}$, and that $\delta_{k,\tau,t} \le k$ for all integers $k$ and all knot vectors $\tau$.

In this chapter B-splines on a knot vector $\tau$ will be denoted by $B_{j,k,\tau}$ (we assume that $\tau_j < \tau_{j+k}$ for all $j$). We normalize them to sum to one on any open interval on which they span the polynomials of order $k$. These B-splines span a linear space $\mathbb{S}_{k,\tau}$, and a typical element of the space is the spline $f = \sum_j c_j B_{j,k,\tau}$. The numbers $a_{j,k}(i)$ in (6.1), in addition to forming a nonnegative partition of unity, have many properties similar to B-splines. For this reason we call $a_{j,k}(i) = a_{j,k,\tau,t}(i)$ a discrete B-spline of order $k$ on $t$ with knots $\tau$. These discrete B-splines span a space of discrete splines. A discrete spline $b$ of order $k$ on $t$ with knots $\tau$ is a vector with $i$th component given by $b_i = \sum_j c_j a_{j,k}(i)$. See [12] for further details and additional references on discrete splines. All estimates in this chapter will be given in terms of the max-norm for vectors and functions. For a vector $x$ we have $\|x\| = \max_i |x_i|$, while if $x$ is a function on an interval $[a,b]$ we have $\|x\| = \max_{t\in[a,b]} |x(t)|$.

6.2. Condition Numbers for Splines
The number $\delta_{k,\tau,t}$ of the previous section is related to a number $\kappa_{k,\tau}$ called the condition number for the B-spline basis relative to the knots $\tau$. To define $\kappa_{k,\tau}$, suppose that $f \in \mathbb{S}_{k,\tau}$ has B-spline coefficients $c$. Then [2]
$$\kappa_{k,\tau} = \sup_{c\ne 0}\frac{\|\sum_j c_j B_{j,k,\tau}\|}{\|c\|}\ \sup_{c\ne 0}\frac{\|c\|}{\|\sum_j c_j B_{j,k,\tau}\|}.$$
Since B-splines form a nonnegative partition of unity, the first supremum is one, and we have
$$\kappa_{k,\tau} = \sup_{c\ne 0}\frac{\|c\|}{\|\sum_j c_j B_{j,k,\tau}\|}. \tag{6.5}$$
The number $\kappa_{k,\tau}$ measures how large B-spline coefficients a spline of unit norm can have. By an analysis similar to the one in the previous section it can be shown that $\kappa_{k,\tau}$ is a condition number for evaluating a spline function, i.e., it measures the sensitivity of the value of a spline in $\mathbb{S}_{k,\tau}$ to perturbations in its B-spline coefficients.
We define the knot independent condition number $\kappa_k$ by
$$\kappa_k = \sup_{\tau} \kappa_{k,\tau}.$$
It is known that $\kappa_k$ grows exponentially with $k$,
The upper bound is due to de Boor [2], while the lower bound was shown in [8]. It has been conjectured [3] that $\kappa_k = O(2^k)$. Recall the definition (6.3) of $\delta_{k,\tau,t}$. In order to compare $\delta_{k,\tau,t}$ and $\kappa_{k,\tau}$ it is convenient to define a family of norms on $\mathbb{S}_{k,\tau}$. Suppose $f \in \mathbb{S}_{k,\tau}$ has B-spline coefficients $c$. For any knot vector $t$ with $t_{i+k} > t_i$ for all $i$ and which satisfies $\tau \subseteq t$, we define $\|f\|_t = \|Ac\|$, where $A$ is the knot insertion matrix from $\tau$ to $t$. Such coefficient norms were found useful in developing algorithms for knot removal of splines; see [10]. In terms of coefficient norms the numbers $\delta_{k,\tau,t}$ in (6.4) and $\kappa_{k,\tau}$ in (6.5) can now be written
We call $\delta_{k,\tau,t}$ the condition number for the discrete B-splines of order $k$ on $t$ with knots $\tau$. Since $\|f\| \le \|f\|_t$ for all $f \in \mathbb{S}_{k,\tau}$ and all knot vectors $\tau, t$ with $\tau \subseteq t$, we see that
$$\delta_{k,\tau,t} \le \kappa_{k,\tau}.$$
This inequality is sharp in the sense that $\delta_{k,\tau,t}$ can be arbitrarily close to $\kappa_{k,\tau}$ for a suitable $t$. This is a consequence of the fact that the B-spline coefficients converge to the spline they represent as more and more knots are inserted; namely that
where $|t| = \max_i (t_{i+1} - t_i)$. Note also that (6.2) translates into $\|f\|_t \le \|f\|_\tau$ for $\tau \subseteq t$, so that $\|f\|_t$ decreases monotonically to $\|f\|$ as $t$ is refined. We summarize the above discussion in the following proposition.
PROPOSITION 6.2.1. Let $\tau$ be a fixed knot vector. Then
$$\kappa_{k,\tau} = \sup_{t} \delta_{k,\tau,t},$$
where the supremum is taken over all knot vectors $t$ such that $\mathbb{S}_{k,\tau} \subseteq \mathbb{S}_{k,t}$.
There is one more condition number that is naturally defined for discrete B-splines,
$$\delta_k^{(l)} = \sup_{\tau}\ \sup_{\#t = \#\tau + l} \delta_{k,\tau,t},$$
where $\#u$ denotes the number of elements in the sequence $u$. The number $\delta_k^{(l)}$ clearly measures how much it is possible to reduce the size of the B-spline coefficients by inserting $l$ new knots. By Proposition 6.2.1 we immediately obtain
In §5 we shall see that $\delta_k^{(1)}$ is bounded by $k$.

6.3. Inserting One New Knot
Let us now return to the main question in this paper: How much can the size of the B-spline coefficients be reduced by inserting one new knot? We will give an answer to this question by studying the condition number $\delta_{k,\tau,\tau\cup\{s\}}$ as a function of $s$ and $\tau$. We first recall from [1] the formulas for inserting one knot. Let $\tau$ be a given knot vector, let $f = \sum_j c_j B_{j,k,\tau}$ be a given spline in $\mathbb{S}_{k,\tau}$, and suppose a new knot $s$ is inserted in $\tau$. For the purpose of studying $\delta_{k,\tau,\tau\cup\{s\}}$ we may assume that $\tau$ contains at least $2k$ knots, for if $\tau$ is shorter, we can extend it to have length $2k$ and extend $f$ with B-spline coefficients that are zero. We assume for the purpose of analysis that $\tau$ is indexed so that $s$ is inserted in the nonempty interval $[\tau_{k-1}, \tau_k)$. Let $t$ denote the refined knot vector $\tau \cup \{s\}$. The B-spline coefficients $b$ of $f$ relative to $t$ are given by
Here, the real numbers $\mu_p$ and $\lambda_p$ are given by
Our task is to bound each $c_i$ in terms of $\|b\|$, i.e., the refined B-spline coefficient of largest absolute value. From (6.8) we see that it is in fact sufficient to bound
in terms of
since all the other $c_i$ are identical to a $b_j$. It follows that there is no restriction in assuming that $\tau$ contains exactly $2k$ elements, $\tau_0, \ldots, \tau_{2k-1}$.
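The single-knot insertion step referred to by (6.8) can be sketched in a few lines. The displayed formulas did not survive this scan, so the convention used below, $b_p = (1-\alpha_p)c_{p-1} + \alpha_p c_p$ with $\alpha_p = (s-\tau_p)/(\tau_{p+k-1}-\tau_p)$, is an assumption: it is the standard single-knot ("Boehm") insertion of [1].

```python
def insert_knot(tau, c, k, s):
    """Coefficients b on tau U {s} from coefficients c on tau (order k).

    Standard single-knot ("Boehm") insertion -- a hedged stand-in for the
    chapter's lost display (6.8):
        b_p = (1 - alpha_p) * c_{p-1} + alpha_p * c_p,
        alpha_p = (s - tau_p) / (tau_{p+k-1} - tau_p), clamped to [0, 1].
    Requires len(tau) == len(c) + k."""
    n = len(c)
    b = []
    for p in range(n + 1):
        lo, hi = tau[p], tau[p + k - 1]
        if hi > lo:
            alpha = min(max((s - lo) / (hi - lo), 0.0), 1.0)
        else:                      # k-tuple knot: alpha is 0 or 1
            alpha = 1.0 if s >= hi else 0.0
        left = c[p - 1] if p > 0 else 0.0
        right = c[p] if p < n else 0.0
        b.append((1.0 - alpha) * left + alpha * right)
    return b

# Each b_p is a convex combination of c_{p-1} and c_p, so the max-norm of
# the coefficients can only decrease, as in (6.1)-(6.2):
c = [1.0, -3.0, 1.0]
b = insert_knot([0.0, 0.0, 0.0, 1.0, 1.0, 1.0], c, 3, 0.5)
print(b)                                      # [1.0, -1.0, -1.0, 1.0]
print(max(map(abs, b)) <= max(map(abs, c)))   # True
```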
The knot insertion matrix for this problem now takes the form
This is a rectangular matrix with k +1 rows and k columns. We want to bound the number
It will be convenient to introduce the variables
$$x_p = \frac{\mu_p}{\lambda_p} = \frac{s - \tau_p}{\tau_{p+k-1} - s} \tag{6.11}$$
for $p = 1, 2, \ldots, k-1$. From (6.8) we can deduce the two formulas
In these recursive formulas we choose $c_0 = b_0$ and $c_{k-1} = b_k$. We then obtain two explicit representations for the $c$'s in terms of the $b$'s,
for $p = 0, 1, \ldots, k-1$. Taking absolute values we end up with the two bounds
If we make the definitions
then (6.12) can be written
for any spline $f \in \mathbb{S}_{k,\tau}$ that has B-spline coefficients $c$ relative to $\tau$ and $b$ relative to $\tau \cup \{s\}$. In the proposition below we show that the constant in (6.14) is the best possible by exhibiting an extremal spline for a given $\tau$ and $s$ for which (6.14) reduces to an equality.

PROPOSITION 6.3.1. For given $\tau$ and $s$ let $f_{k,s}$ be the spline
where $H_k$ is given by (6.13). Then
Proof. From (6.14) it is clear that
Also, since $\|f_{k,s}\|_\tau = \max_{p\in\{0,1,\ldots,k-1\}} H_k(p,s)$, the proof will be complete if we can show that $\|f_{k,s}\|_{\tau\cup\{s\}} = 1$. By the definitions (6.11) and (6.13) it is easily seen that $u_k(p,s)$ increases with $p$ and decreases with $s$, while $v_k(p,s)$ decreases with $p$ and increases with $s$. It follows that there must be an integer $p_0 = p_0(s)$ with $p_0 \in \{0,1,\ldots,k-2\}$ such that the B-spline coefficients $c_i$ of $f_{k,s}$ are given by
These B-spline coefficients are transformed to the coefficients $b$ on $\tau \cup \{s\}$ by the formulas (6.8), with $\mu_i = x_i/(1+x_i)$ and $\lambda_i = 1/(1+x_i)$ (recall that $x_i = \mu_i/\lambda_i$). We first observe that $|b_0| = |c_0| = 1$ and $|b_k| = |c_{k-1}| = 1$. Now from (6.8) we have
Therefore, since
we see that
The proof will now be complete if we can show that $|b_{p_0+1}| \le 1$. We find
By the definition of $p_0$ we have $v_k(p_0+1,s) \le u_k(p_0+1,s)$ so that by (6.19),
We must also have $u_k(p_0,s) \le v_k(p_0,s)$, which means that
by (6.19). Hence we have $|b_{p_0+1}| \le 1$, and it follows that $\|b\| = 1$. □

The spline $f_{k,s}$ has coefficients with interesting equi-oscillating properties. In this connection we recall that for the B-spline condition number (6.5) the spline of unit norm with largest B-spline coefficients is known. Indeed, we have (see [6], [11])
where $C_{k,\tau}$, the Chebyshev spline of order $k$ on $\tau$, is the unique function in $\mathbb{S}_{k,\tau}$ which satisfies $\|C_{k,\tau}\| \le 1$ and $C_{k,\tau}(\xi_i) = (-1)^{i-1}$ for $k$ points $\xi_1, \ldots, \xi_k$ with $\xi_1 < \xi_2 < \cdots < \xi_k$. (In general the number of equi-oscillating points equals the dimension of the spline space $\mathbb{S}_{k,\tau}$.) The spline $f_{k,s}$ has similar properties. From the proof of Proposition 6.3.1 there exists an integer $p_0 \in \{0,1,\ldots,k-2\}$ such that $f_{k,s}$ can be written in the following form on the two knot vectors $\tau = (\tau_i)_{i=0}^{2k-1}$ and $t = \tau \cup \{s\}$:
where $|b_{p_0+1}| \le 1$. Now let $b$ and $c$ be the B-spline coefficients of $f_{k,s}$ on $t$ and $\tau$, respectively. Then
is of the form (6.1) with $a_{i-1,k}(i) = \lambda_i$ and $a_{i,k}(i) = \mu_i$. Thus, $b = (b_i)$ defines a discrete spline with discrete B-spline coefficients $c$. Since $\|b\| \le 1$ and $b_{i_j} = (-1)^{j-1}$ for $k$ integers $0 = i_1 < i_2 < \cdots < i_k = k$, we call $b$ a discrete Chebyshev spline of order $k$ on $t$ with knots $\tau$.
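The two-sided construction of this section can be carried out numerically. The sketch below freezes the refined coefficients at alternating $\pm 1$ and accumulates the bounds $u_k(p,s)$ from the left and $v_k(p,s)$ from the right, as in the two explicit representations behind (6.12)--(6.14); the condition number is then $\max_p \min\{u_k, v_k\}$. The formula used for $x_p$ is an assumption (the display (6.11) did not survive this scan), chosen because it reproduces the values worked out later in the chapter, e.g. $\delta_{3,\tau_U}(1/2) = 5/3$.

```python
def delta_one_knot(tau, k, s):
    """Condition number for inserting the single knot s into the knot
    vector tau of order k (tau has 2k knots, tau[k-1] <= s < tau[k]).

    Hedged reconstruction of Proposition 6.3.1; the x_p formula below is
    an assumption standing in for the lost display (6.11), but it matches
    the worked values quoted in Sections 6.4-6.5."""
    # x_p = mu_p / lambda_p, reconstructed as (s - tau_p) / (tau_{p+k-1} - s)
    x = [0.0] + [(s - tau[p]) / (tau[p + k - 1] - s) for p in range(1, k)]
    u = [1.0] * k                     # forward bounds  u_k(p, s)
    for p in range(1, k):
        u[p] = 1.0 + (1.0 + u[p - 1]) / x[p] if x[p] > 0 else float("inf")
    v = [1.0] * k                     # backward bounds v_k(p, s)
    for p in range(k - 2, -1, -1):
        v[p] = x[p + 1] * (1.0 + v[p + 1]) + 1.0
    # ||f_{k,s}||_tau = max_p H_k(p, s) with H_k = min(u_k, v_k)
    return max(min(a, b) for a, b in zip(u, v))

uniform3 = [j - 2.0 for j in range(6)]          # (-2, -1, 0, 1, 2, 3)
print(delta_one_knot(uniform3, 3, 0.5))         # 1.6666... = 5/3
```

For larger $k$ the same routine shows the $\sqrt{k}$-like growth for uniform knots discussed in §6.4 (numerically, $\delta_{k,\tau_U}(1/2)/\sqrt{k}$ creeps up towards a constant of about $1.25$).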
6.4. The Uniform Case
From Proposition 6.3.1 we can in principle work out the condition number $\delta_{k,\tau}(s)$ for inserting one knot for any knot vector $\tau$. In this section we consider the uniform case. Without loss of generality we consider the knot vector
$$\tau_U = (\tau_j)_{j=0}^{2k-1}, \qquad \tau_j = j - k + 1.$$
As before we consider inserting a knot $s$ in $[\tau_{k-1}, \tau_k) = [0,1)$. The main theorem that we prove in this section is the following.

THEOREM 6.4.1. The condition number $\delta_{k,\tau_U}(s)$ has asymptotic growth rate given by
$$\lim_{k\to\infty} \frac{\delta_{k,\tau_U}(s)}{\sqrt{k}} = \sqrt{\frac{\pi}{2}}. \tag{6.22}$$
The convergence is uniform for s £ [0, 1). This growth rate is in marked contrast to the growth rate in the worst case (that is, for 6^. '") which we shall see is linear in k. From numerical experiments it seems that in most cases the growth rate given by Theorem 6.4.1 is the most realistic one. The proof of this result is via several lemmas. Let us start by looking specifically at the quadratic and cubic cases. Note first of all that in the uniform case we have by (6.11) that
For $k = 3$ the extremal spline $f_{3,s}$ has three B-spline coefficients. Since the first and last coefficients are equal to 1, the largest coefficient must be the middle one, and we can write $f_{3,s}$ in the form
where
It is easily seen that $u_3(1,s)$ decreases from 3 to 1, and that $v_3(1,s)$ increases from 1 to 3 as $s$ varies from zero to 1. Moreover, we have $u_3(1,\tfrac12) = v_3(1,\tfrac12)$. We conclude that
A graph of $\delta_{3,\tau_U}(s)$ is shown in Fig. 6.1. We see that $\delta_{3,\tau_U}(s)$ increases from $\delta_{3,\tau_U}(0) = 1$ until $\delta_{3,\tau_U}(\tfrac12) = 5/3$, and then decreases symmetrically to $\delta_{3,\tau_U}(1) = 1$.
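The peak value $5/3$ can also be checked at the matrix level. The $4\times 3$ insertion matrix below is a reconstruction (standard single-knot insertion for uniform knots $(-2,-1,0,1,2,3)$, $k = 3$, $s = 1/2$; the chapter's displayed matrix was lost): the extremal coefficients $(1, -5/3, 1)$ are mapped to refined coefficients of maximum modulus one.

```python
# Matrix-level check that delta_{3,tau_U}(1/2) = 5/3.
A = [[1.0,  0.0,  0.0],
     [0.25, 0.75, 0.0],
     [0.0,  0.75, 0.25],
     [0.0,  0.0,  1.0]]            # rows are convex combinations

c = [1.0, -5.0 / 3.0, 1.0]          # extremal coefficients, alternating signs
b = [sum(a * z for a, z in zip(row, c)) for row in A]
ratio = max(abs(z) for z in c) / max(abs(z) for z in b)
print(b)        # approximately (1, -1, -1, 1)
print(ratio)    # approximately 5/3

# A crude random search suggests no coefficient vector does better:
import random
random.seed(0)
best = 0.0
for _ in range(20000):
    cc = [random.uniform(-1.0, 1.0) for _ in range(3)]
    bb = [sum(a * z for a, z in zip(row, cc)) for row in A]
    best = max(best, max(map(abs, cc)) / max(map(abs, bb)))
print(best <= ratio + 1e-9)   # True
```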
FIG. 6.1. The quadratic ($k = 3$) discrete condition number for uniform knots as a function of the inserted knot in $[0,1]$.

Consider next the cubic case. We find
Here $u_4(1,s)$ decreases from 2 to 1, while $v_4(1,s)$ increases from 2 to 7 as $s$ varies from zero to 1. It follows that
Similarly, we find
Since
the function
is now given by
The graph of $\delta_{4,\tau_U}(s)$ is shown in Fig. 6.2. For $k = 4$ the largest value of $\delta$ is obtained by choosing $s = 0$, i.e., by increasing the multiplicity of an already existing knot. We observe that $p_0 = 1$ for all $s \in [0,1]$. We have also computed the B-spline representation of $f_{4,s}$ on the refined knot vector $t = (-3, -2, -1, 0, s, 1, 2, 3, 4)$ to see how the coefficient $b_{p_0+1} = b_2$ behaves as a function of $s$ (see the proof of Proposition 6.3.1 for the definition of $p_0$). We have
The function $b_2(s)$ is defined by (6.18) and is given by
FIG. 6.2. The cubic ($k = 4$) discrete condition number for uniform knots as a function of the inserted knot in $[0,1]$.
After some calculation we find
The value of $b_2(s)$ increases from $b_2(0) = -1$ to $b_2(1) = 1$ almost linearly. The situation for general $k$ is very similar to what has been exemplified here for $k = 3$ and $k = 4$. For odd $k$ we obtain the largest $\delta$ by inserting a knot midway between two old knots. In the even case we have the largest $\delta$ when the multiplicity of a knot is increased. The following result gives an explicit formula for the condition number as a function of $s$.

PROPOSITION 6.4.2. Suppose that a knot $s$ is inserted in the interval $[0,1]$ of the uniform knot vector $\tau_U$. For $k$ even the condition number is then given by
while for k odd
Here, the function $u_k$ is given by
We start the proof of this result by taking a closer look at the functions $u_k(p,s)$ and $v_k(p,s)$ defined in (6.13).

LEMMA 6.4.3. For the uniform knot vector $\tau_U$ the functions $u_k$ and $v_k$ satisfy
Proof. For this $\tau_U$ we find from (6.23) that

We then obtain from (6.13) that
This proves (i). To prove (ii) we observe that

Then
since $1/x_1(1) = 0$. Hence relation (ii) follows. Finally, by combining (i) and (ii) we find
which is (iii). □

Proof of Proposition 6.4.2. Fix an $s$ in the interval $[0,1)$. Recall from Proposition 6.3.1 that $\delta_{k,\tau_U}(s) = \|c\|$, where $c$ is the coefficient vector of the spline $f_{k,s}$ given by (6.21). Suppose first that $k$ is even. We claim that $p_0 = k/2 - 1$. To show this we first note that by Lemma 6.4.3 the equalities $u_k(k/2-1,0) = u_k(k/2,1) = v_k(k/2-1,0)$ hold. Therefore, since $u_k$ decreases with $s$ and $v_k$ increases with $s$ we have $u_k(k/2-1,s) \le v_k(k/2-1,s)$. Similarly, by Lemma 6.4.3 and monotonicity $u_k(k/2,1) = u_k(k/2-1,0) = v_k(k/2,1)$, so that $u_k(k/2,s) \ge u_k(k/2,1) = v_k(k/2,1) \ge v_k(k/2,s)$. We have now shown that $u_k(k/2-1,s) \le v_k(k/2-1,s)$ and $u_k(k/2,s) \ge v_k(k/2,s)$. It follows that $p_0 = k/2-1$ when $k$ is even. By the monotonicity of $u_k$ and $v_k$ we obtain $\delta_{k,\tau_U}(s) = \max\{u_k(k/2-1,s),\, v_k(k/2,s)\}$. Now using Lemma 6.4.3 and (6.17) we have $v_k(k/2,0) \le u_k(k/2-1,0)$ and $u_k(k/2-1,1) \le v_k(k/2,1)$. The result for $k$ even now follows. Suppose next that $k$ is odd. We claim that
Let $p = (k-1)/2$. Using Lemma 6.4.3 we see that
By (6.17) and Lemma 6.4.3(i) we find for $s \in [0, \tfrac12)$ that
and
for $s \in [\tfrac12, 1)$. It follows that
Next, again by monotonicity and Lemma 6.4.3 we see that

Also

Therefore,
for $s \in [0,1)$. Combining this with (6.27) we obtain the required formula for $p_0$ for $k$ odd. Finally, if $s \in [0,\tfrac12)$ then $\delta_{k,\tau_U}(s) = \max\{u_k(p-1,s),\, v_k(p,s)\} = v_k(p,s) = u_k(p, 1-s)$, while if $s \in [\tfrac12,1)$ then $\delta_{k,\tau_U}(s) = \max\{u_k(p,s),\, v_k(p+1,s)\} = u_k(p,s)$. This completes the proof of the proposition. □

We also want to determine the growth rate of $\delta_{k,\tau_U}(s)$ as a function of $k$. The following result gives the answer.

THEOREM 6.4.4. Suppose that a knot $s \in [0,1)$ is inserted in the uniform knot vector $\tau_U$. For $k$ even, set $m = (k-2)/2$. Then
For $k$ odd, set $m = (k-1)/2$. Then
Here,
The number Wm occurs frequently in classical mathematics. We have, for example,
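The display defining $W_m$ did not survive this scan. Identifying it with the classical Wallis ratio $W_m = \prod_{i=1}^m (2i-1)/(2i) = 4^{-m}\binom{2m}{m}$ is an assumption, but one consistent with the use of Wallis's inequality in the proof of Theorem 6.4.1; under it, the classical facts read as follows.

```python
from math import comb, pi, sqrt

def wallis(m):
    """W_m = prod_{i=1}^m (2i-1)/(2i), equal to binom(2m, m) / 4**m.
    Identifying the chapter's W_m with this classical Wallis ratio is an
    assumption (the defining display was lost in the scan)."""
    w = 1.0
    for i in range(1, m + 1):
        w *= (2 * i - 1) / (2 * i)
    return w

print(wallis(1), wallis(2), wallis(3))   # 0.5  0.375  0.3125 (up to rounding)
print(comb(6, 3) / 4 ** 3)               # 0.3125, the binomial form of W_3

# Classical asymptotics: W_m * sqrt(pi * m) -> 1 from below as m grows.
for m in (10, 100, 10000):
    print(m, wallis(m) * sqrt(pi * m))
```

Numerically, $\delta_{4,\tau_U}(0) = 2$ (from the cubic discussion above) equals $1/W_1$, which fits $W_m$'s appearance in Theorem 6.4.4; this identification is an observation, not a statement recovered from the lost displays.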
The following lemma is needed for the proof of Theorem 6.4.4.

LEMMA 6.4.5. For any positive integer $m$,
Proof. Manipulating $u_{2m+2}(m,0)$ as given by (6.25) we find
We operate similarly on $u_{2m+1}$:
Proof of Theorem 6.4.4. Suppose first that $k$ is odd and that $m = (k-1)/2$. In this case the graph of $\delta_{k,\tau_U}(s)$ as a function of $s$ has the shape exemplified for $k = 3$ in Fig. 6.1. It follows from Proposition 6.4.2 that
This proves the two middle inequalities in (6.29). The leftmost equality in (6.29) follows from Lemma 6.4.5. It remains to prove the rightmost inequality in (6.29). Now by Proposition 6.4.2,
For any real numbers $a, b, x$ with $0 < x < a < b$ we have the elementary inequality
This inequality implies that the sum in (6.30) increases if we add $\tfrac12$ to each of the linear factors in the sum. As a consequence of this and by Lemma 6.4.5, we find
This completes the proof of (6.29). Suppose next that $k$ is even and that $m = (k-2)/2$. By Proposition 6.4.2 the shape of the graph of $\delta_{k,\tau_U}(s)$ is now as in Fig. 6.2. Using Proposition 6.4.2 and (6.17) we obtain the two weak inequalities in (6.28), and the rightmost equality follows from Proposition 6.4.2. It remains to show the leftmost inequality in (6.28). We have
Using (6.31) we get something smaller if we subtract $\tfrac12$ from each of the linear factors. Therefore,
The proof of (6.28) is now complete. □

We have now also virtually proved Theorem 6.4.1.

Proof of Theorem 6.4.1. To prove the asymptotic estimate (6.22), we use Wallis's inequality. Combining this inequality with (6.28) and (6.29) we find
Dividing by $\sqrt{k}$ and taking limits we obtain (6.22). □

6.5. The Worst Case
In the last section we saw how large the condition number could be in the case of uniform knots. We would like to know how large $\delta_{k,\tau}(s)$ can possibly be. Before answering this we consider the Bernstein knots
$$\tau_k = (\underbrace{0,\ldots,0}_{k},\ \underbrace{1,\ldots,1}_{k}).$$
Inserting a knot $s$ in $[0,1)$, we see that for this knot vector all the $x_j$ in (6.11) are identical,
$$x_j = \frac{s}{1-s} =: x, \qquad j = 1, \ldots, k-1.$$
Because of this the expressions for the functions $h$, $u_k$, $v_k$, and $H_k$ simplify, and it will be convenient to use $x$ as a variable. We therefore make the definitions
and
By Proposition 6.3.1 the extremal spline $f_{k,s}$ for the knot vector $\tau_k$ can be written in the form
We start by giving the main result that we prove in this section. This result gives an almost complete characterization of $\delta_k^{(1)}$.

THEOREM 6.5.1. The discrete condition number $\delta_k^{(1)}$ satisfies $\delta_1^{(1)} = \delta_2^{(1)} = 1$. Moreover, for all polynomial orders $k \ge 3$,
$$k - 2\sqrt{k} < \delta_k^{(1)} \le k,$$
with equality on the right precisely when $k$ is odd.
The proof is again via several lemmas and propositions. The first result shows that the Bernstein knots give the knot vector with the largest condition number for removing one knot.

PROPOSITION 6.5.2. For any knot vector $\tau$ and any inserted knot $s$, the condition number $\delta_{k,\tau}(s)$ is bounded by
Moreover, for the Bernstein knots $\tau_k$ and $s \in [0,1)$,
where $x = s/(1-s)$. Hence, the knot independent condition number $\delta_k^{(1)}$ satisfies
Proof. From (6.12) and (6.13) we have
where $x_j = x_j(s)$ is given by (6.11). We observe that the $x_i$ are ordered as in (6.39). We see that for a fixed $p$ the function $h(p; x_1^{-1}, \ldots, x_p^{-1})$ increases when one or more of $x_1, \ldots, x_p$ are decreased, while $h(k-1-p; x_{p+1}, \ldots, x_{k-1})$ also increases when $x_{p+1}, \ldots, x_{k-1}$ are increased. Therefore, if we fix $x_p$, then both bounds increase if we set all the $x_i$ equal to $x = x_p$. This results in
Taking the supremum over $x$ in $[0,\infty)$ proves (6.36). The equality (6.37) follows from (6.35). Finally, since the mapping $x(s) = s/(1-s)$ maps $[0,1)$ one-to-one and onto $[0,\infty)$, we obtain (6.38) by combining (6.36) and (6.37). □
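The proposition can be illustrated numerically. The sketch below uses $q(p,y) = 1 + 2(y + y^2 + \cdots + y^p)$ and $g_k(p,x) = \min\{q(p,1/x),\, q(k-1-p,x)\}$; this is a reconstruction (the displays were lost in this scan), but it is consistent with Lemma 6.5.3 below, where $g_k((k-1)/2, 1) = k$, and with the worked uniform-knot values of §6.4.

```python
# Numerical illustration of Proposition 6.5.2: the Bernstein knots maximize
# the condition number for inserting one knot.

def q(p, y):
    return 1.0 + 2.0 * sum(y ** j for j in range(1, p + 1))

def delta_bernstein(k, s):
    x = s / (1.0 - s)                 # the substitution x = s / (1 - s)
    return max(min(q(p, 1.0 / x), q(k - 1 - p, x)) for p in range(k))

def delta_uniform(k, s):
    # same two-sided bounds, evaluated for the uniform knots of Section 6.4
    tau = [j - k + 1.0 for j in range(2 * k)]
    x = [0.0] + [(s - tau[p]) / (tau[p + k - 1] - s) for p in range(1, k)]
    u, v = [1.0] * k, [1.0] * k
    for p in range(1, k):
        u[p] = 1.0 + (1.0 + u[p - 1]) / x[p]
    for p in range(k - 2, -1, -1):
        v[p] = x[p + 1] * (1.0 + v[p + 1]) + 1.0
    return max(min(a, b) for a, b in zip(u, v))

for k in (3, 5, 7):
    print(k, delta_bernstein(k, 0.5), round(delta_uniform(k, 0.5), 4))
# k = 3: 3.0 vs 1.6667;  k = 5: 5.0 vs 2.3714;  k = 7: 7.0 vs 2.9481.
# For odd k the Bernstein value equals k, and it dominates the uniform one.
```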
Our interest is now focused on the function $g_k$. Clearly, our aim should be to determine the supremum of $g_k(p,x)$ for nonnegative real $x$ and integer $p$ in $\{0,1,\ldots,k-1\}$. However, it is much simpler to determine the supremum when $p$ is allowed to be a real number, since we can then make use of calculus. We therefore consider this problem first and will find that as a by-product we also determine the supremum over the integers in the case where $k$ is odd.
FIG. 6.4. Contour plot of $g_5(p,x)$.

LEMMA 6.5.3. The supremum of $g_k(p,x)$ with both $p$ and $x$ real is given by
and this value is attained at one and only one point, namely for $(p^*, x^*) = ((k-1)/2, 1)$.

Proof. For $x = 1$ we find
Thus $g_k(p,1) \le k$ and $g_k((k-1)/2, 1) = k$. For the general case we will first determine $f_k(x) = \max_p g_k(p,x)$ and then determine $\max_x f_k(x)$. Note that $g_k(p,0) = 1$ for all $p$, so that the supremum is not attained for $x = 0$.
Now let $x$ be a fixed positive real number. Then we have
for all $p \in [0, k-1]$. We also have
for both $y = x$ and $y = 1/x$. This means that the equation
can be solved uniquely for $p$ since $h_x(0) < 0$ and $h_x(k-1) > 0$, and also $h'_x(p) > 0$ for $0 < p < k-1$. This value of $p$, which we call $p_k(x)$, must lead to $g_k(p,x)$ having its maximum value for fixed values of $x$ and $k$. This is because by increasing $p$ we decrease $q(k-1-p, x)$, while decreasing $p$ leads to a decrease in $q(p, 1/x)$. Thus $g_k(p,x)$ is decreased by moving $p$ away from $p_k(x)$. By simple algebra one finds from (6.40) that
FIG. 6.5. For $y$ fixed, the two functions.
FIG. 6.6. The function $p_5(x)$.
The function $p_k(x)$ gives a curve in the $xp$-plane which is then mapped onto a curve on the surface $g_k(p,x)$. By construction, this curve passes through the maximum of the function $g_k$. This curve is given by the function
To determine the maximum of $f_k$ we compute its derivative as
From this it is easy to see that $f'_k$ is positive for $x \in (0,1)$, negative for $x \in (1,\infty)$, and $f'_k(1) = 0$. This means that there is a global maximum at $x = 1$ for which we find
The uniqueness of the maximum follows from the fact that for fixed $k$ and $x$ we determined $p_k(x)$ such that $g_k(p_k(x), x)$ is a unique maximum for $g_k(p,x)$. We also saw that $f_k$ had a unique maximum, from which we conclude that $g_k$ also has a unique maximum. □

Figure 6.8 shows a plot of the spline $f_{k,1/2}$ (it is of course a polynomial) in the case $k = 5$ together with the coefficients relative to both knot vectors. For even $k$, we only have the bound $\delta_k^{(1)} < k$. In this case the maximum of $g_k(p,x)$ over integer values of $p$ is strictly smaller than the maximum over real $p$, since $(k-1)/2$ is not an integer. The next lemma gives a characterization of the maximum.

LEMMA 6.5.4. For even $k$ set $m = k/2$. The condition number $\delta_k^{(1)}$ is given by
FIG. 6.8. The spline $f_{5,1/2}$ and its B-spline coefficients relative to $\tau_5$ (black) and $\tau_5 \cup \{1/2\}$ (grey).
where $x^*_k$ is the real solution to the equation
and $s^*_k = x^*_k/(1 + x^*_k)$. Equivalently, the number $x^*_k$ is the real solution to
which satisfies $x^*_k > 1$.

Proof. We will follow a line of argument similar to that in the proof of Lemma 6.5.3, except that the roles of $p$ and $x$ will be reversed. From Proposition 6.5.2 we know that $\delta_{k,\tau_k}(s)$ is given by taking the supremum of $g_k(p,x)$ over the set $\{0,1,\ldots,k-1\} \times [0,\infty)$. In Lemma 6.5.3 we found that if we let $p$ vary continuously in $[0, k-1]$, then the maximum of $g_k(p,x)$ is attained at the point $((k-1)/2, 1)$. The idea of this proof is basically to show that the maximum over the integers must be attained for the $p$ that is closest to $(k-1)/2$. To begin with, we may think of $p$ as a real. We first notice that $g_k(0,x) = g_k(k-1,x) = 1$. It is therefore sufficient to consider $p \in (0, k-1)$. Let $p$ be a given number in this range. Then $q(p, 1/x)$ decreases with increasing $x$ while $q(k-1-p, x)$ increases with $x$. This means that (6.40) can be solved uniquely for $x$, yielding a value $x_k(p)$. In the proof of Lemma 6.5.3 an $x$ in $(0,\infty)$ was given, and we then solved (6.40) uniquely for a number $p_k(x)$ that also had the property that $\max_p g_k(p,x) = g_k(p_k(x), x)$. By a similar argument we must have $\max_x g_k(p,x) = g_k(p, x_k(p))$. Now the functions $x_k$ and $p_k$ must be inverses of each other; they must therefore both be strictly monotone (on $(0,\infty)$ and $(0, k-1)$, respectively), and it is easily checked that both functions are in fact strictly increasing. But this means that the function $\tilde f_k(p) = g_k(p, x_k(p))$ is just a reparametrization of $f_k(x)$ and therefore must be concave like $f_k$. Since we know that $f_k$ assumes its unique maximum at
$p = (k-1)/2$ for real $p$, it must assume its maximum at the integer closest to $(k-1)/2$ when $p$ is restricted to the integers. For even $k$ we therefore have
But since $g_k(p,x) = g_k(k-1-p, 1/x)$ this reduces to (6.41). The two characterizations (6.42) and (6.43) of the number $x^*_k$ are consequences of the equation $x^*_k = x_k(m)$. The first one follows immediately from $q(m, 1/x^*_k) = q(m-1, x^*_k)$, while (6.43) follows by summing the geometric series in (6.42). □

There does not seem to be a simple closed expression for $\delta_k^{(1)}$ for even $k$. On the other hand, the next lemma gives a good lower bound.

LEMMA 6.5.5. Suppose that $k > 2$ is an even integer and set $m = k/2$. If $s = 1/(1 + e^{-1/m^2})$, then
Proof. We prove this by employing the spline $f$ which on $\tau_k$ has B-spline coefficients
where $x = e^{1/m^2}$. We claim that $f = f_{k,s}$, the extremal spline that was introduced in §3. To prove this it is sufficient to show that
Note that $x > 1$, so that $q(m-1, 1/x) \le q(m, x)$, and thus the first $m$ coefficients have the right value. If we can show that $q(m, 1/x) \ge q(m-1, x)$, it will follow that the last $m$ coefficients also have the right value. The required inequality is equivalent to
Summing the geometric series, this is equivalent to
Using Taylor expansions, we have
while
Since
we see that the condition (6.44) is indeed satisfied, even with strict inequality. Thus we have $f = f_{k,s}$ and hence $\delta_{k,\tau_k}(s) = |c_m|$, since this is the coefficient with largest absolute value. We have
with equality only if $m = 1$. To obtain the inequality, we have included two terms in the Taylor series for the exponential function. The bound could clearly be improved by including more terms. That $k > \delta_k^{(1)}$ follows from the unicity of the maximum of $g_k$, and the inequality $\delta_k^{(1)} > \delta_{k,\tau_k,\tau_k\cup\{s\}}$ follows since $x \ne x^*_k$. □

With this we have completed the proof of Theorem 6.5.1. A few comments are in order. Since we have only bounded $\delta_k^{(1)}$ from below and above in the case of even $k$, there is room for improvement. Let us first consider the cubic case in detail. The following result is immediate from Lemma 6.5.4.

PROPOSITION 6.5.6. The number $x^*_4$ has the value
while
and Figure 6.9 shows a plot of the extremal cubic spline f_{4,s^*}. We notice that all the coefficients relative to the refined knot vector have unit magnitude. For comparison we have also included a plot (Fig. 6.10) of f_{4,s} with s = 1/(1 + e^{-1/m^2}). For this spline |c_3| < 1, indicating that s is not placed optimally. This is one of the two approximations we made that enabled us to give a lower bound for δ_k. For k > 4 equation (6.43) does not appear to have a simple solution, but we can always find the solution numerically and thereby determine δ_k. The result of doing this is shown in Table 6.1. The other possibility is to determine a series expansion for the condition number as a function of k. Lemma 6.5.4 gives a suitable characterization of x_k^*
Knot Insertion and Deletion Algorithms
FIG. 6.9. The spline f_{4,s^*} with its two control polygons.
FIG. 6.10. The spline f_{4,s} with s given by s = 1/(1 + e^{-1/m^2}).
TABLE 6.1 Numerical values for the first even condition numbers.
for even k. Using a program like Mathematica we can find a series expansion for x_k^* in terms of k^{-1} from (6.43). The result is
From this it is possible to determine a series for δ_k which gives
From this we see that the lower bound k - 2√k given in Theorem 6.5.1 is a reasonable compromise between accuracy and simplicity.

References

[1] W. Boehm, Inserting new knots into B-spline curves, Comput. Aided Design, 12 (1980), pp. 199-201.
[2] C. de Boor, On local linear functionals which vanish at all B-splines but one, in Theory of Approximation with Applications, A. G. Law and B. N. Sahney, eds., Academic Press, New York, 1976, pp. 120-145.
[3] C. de Boor, The exact condition number of the B-spline basis may be hard to determine, J. Approx. Theory, 60 (1990), pp. 344-359.
[4] E. Cohen and L. L. Schumaker, Rates of convergence of control polygons, Comput. Aided Geom. Design, 2 (1985), pp. 229-235.
[5] W. Dahmen, Subdivision algorithms converge quadratically, J. Comp. Appl. Math., 16 (1986), pp. 145-158.
[6] S. Demko, On the existence of interpolating projections onto spline spaces, J. Approx. Theory, 43 (1985), pp. 151-156.
[7] G. Farin, G. Rein, N. Sapidis, and A. J. Worsey, Fairing cubic B-spline curves, Comput. Aided Geom. Design, 4 (1987), pp. 91-103.
[8] T. Lyche, A note on the condition number of the B-spline basis, J. Approx. Theory, 22 (1978), pp. 202-205.
[9] T. Lyche, Discrete B-splines and conversion problems, in Computation of Curves and Surfaces, W. Dahmen, M. Gasca, and C. A. Micchelli, eds., Kluwer, Dordrecht, the Netherlands, 1990, pp. 117-134.
[10] T. Lyche and K. Mørken, A data reduction strategy for splines with applications to the approximation of functions and data, IMA J. Numer. Anal., 8 (1988), pp. 185-208.
[11] K. Mørken, On two topics in spline theory: Discrete splines and the equioscillating spline, Cand. Real. thesis, University of Oslo, Oslo, Norway, 1984.
[12] K. Mørken, Contributions to the theory and applications of splines, Ph.D. dissertation, University of Oslo, Oslo, Norway, 1989.
[13] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
CHAPTER
7
An Envelope Approach to a Sketching Editor for Hierarchical Free-form Curve Design and Modification Michael J. Banks, Elaine Cohen, and Timothy I. Mueller
7.1.
Introduction
Natural and intuitive curve design and modification are the goals of many interactive specification schemes. It is desirable that the process support an initial rough specification followed by hierarchical modifications that add detail and refinement to the design. This is known as hierarchical design.

There are many approaches to this problem, even within the framework of the nonuniform B-spline representation. Methods based on traditional interpolation specify points to interpolate and then create a spline, usually using the complete spline interpolation method. If the data is well behaved, such methods create a reasonable curve, although they are not well suited to embedding local modifications. Other interpolation methods require either specification or estimation of tangents at each point and use piecewise cubic Hermite interpolation to create a curve. This basic strategy can also be used for hierarchical modification, but the user must either specify quite a bit of additional information to get the desired shape or rely on the system to estimate tangent values.

At other times the user is allowed to "sketch" on the screen, and data is captured by frequently sampling the position of an input device while it is in motion. This results in a large amount of data that must be represented in a compact way. Piecewise Bezier curves, piecewise G1 Hermite curves, or B-spline curves are candidates for such representations [1], [2].

When B-splines are used for curve design, the control polygon method can be used for specification and later editing. Using this method, the points of the control polygon are interactively specified, and then the initial curve is drawn. Editing and modification of these curves is based upon three basic operations: adding, moving, and deleting points from the control polygon. There are a number of problems and special cases that have forced most implementations to adopt some sort of unwieldy modal or menu-driven style of interaction.
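The three basic control polygon operations just mentioned can be sketched as plain list manipulations. The following is an illustrative sketch of ours, not code from the chapter; the function names are invented:

```python
# Hypothetical helpers (our naming): a control polygon is an ordered
# list of 2D vertices, and editing reduces to three list operations.
def add_vertex(polygon, index, point):
    """Insert point so that it becomes vertex number `index`."""
    return polygon[:index] + [point] + polygon[index:]

def move_vertex(polygon, index, point):
    """Replace vertex `index` with a new position."""
    return polygon[:index] + [point] + polygon[index + 1:]

def delete_vertex(polygon, index):
    """Remove vertex `index`, joining its two neighbors."""
    return polygon[:index] + polygon[index + 1:]

poly = [(0, 0), (1, 2), (3, 2)]
poly = add_vertex(poly, 2, (2, 3))    # insert between the 2nd and 3rd vertices
poly = move_vertex(poly, 0, (0, 1))   # drag the first vertex
poly = delete_vertex(poly, 3)         # remove the last vertex
```

Note the explicit index argument to add_vertex: having to say where the new vertex goes, not just its position, is exactly the extra information that pushes traditional editors into the modal add before/add after interaction styles described next.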
Namely, when adding a control vertex, the user must specify not only the position of the vertex, but also where to put it in the control polygon. Often
FIG. 7.1. Envelope sketching.
this takes the form of separate modes, such as add after or add before to insert the new vertex before or after the current vertex. In addition, many editors provide a mode that lets the user add a number of new vertices between two existing vertices. Similar problems do not arise for moving or deleting vertices, but the move and delete operations are usually separate modes as well. Another form of adding degrees of freedom that supports hierarchical design lets the user refine the curve [3]; that is, the identical curve is defined over a more complex B-spline basis. The user can then modify the new control points of the curve using the editing methods previously described. Putting all these together means that users are forced to change their method of interaction many times during a single editing session.

We have experimented with an approach, the envelope approach, in which we have polygon editing and hierarchical design within a single editor and which mimics the natural actions used by humans in sketching. The knot vector is determined based on considerations of local effect, spline geometry, and the hand motion creating the envelope stroke. That is, the editor refines or reduces the underlying knot vector in a manner consistent with the editing operations being carried out.

7.2. The Envelope Approach

One common manner of sketching with pencil and paper is to draw a number of short straight line segments to represent curves in a sketch, as in Fig. 7.1. Since control polygons are composed of a number of connected line segments, this type of sketching provides a natural analogy for the design of an algorithm. Some of the characteristics of this type of sketching are:

- Groups of small line segments are sketched following the same direction in order to modify or extend the sketched curve. Thus, the operations to create the curve have some notion of directionality.
An Envelope Sketching Editor
- Segments drawn in this way usually overlap. The intent of the overlap usually means that the previous line should be cut or extended in some fashion. For example, if the overlap occurs at the end of the curve in the direction of sketching, the curve is usually being extended further in that direction. However, if the overlap occurs somewhere in the middle of the curve, it usually signifies some sort of modification to the existing lines, and, in fact, may mean a local modification to the curve, or even mean to truncate the curve.

- Each segment can be thought of as specifying a tangent to the final sketched curve. As the number of sketched lines increases, the resulting curve is more and more fully specified.

- Some segments are outside and do not intersect any of the others. If they are far away, they should be interpreted as being erroneous and hence to be ignored. If they are nearby and are relatively parallel, but do not intersect a segment, it is probably meant that they should be used instead of the nearby segment.

7.3. Curve Sketching Using Envelopes
Inherent in paper-and-pencil sketching is the idea that the results are inexact. In fact, when viewing such a sketch, we even rely on cues given by the lightness or darkness of the sketched lines. In this method, when a curve is created, we shall derive a B-spline control polygon from the sketched lines. Unfortunately, for constructing a control polygon we must somehow derive exact values to serve as the control vertices. As a result, each distinct case of a new stroke crossing an existing control polygon is considered to provide, as nearly as possible, a natural correspondence with the paper-and-pencil analogy. The technique can be decomposed into five separate cases to be handled. Several of the cases rely on the concept of a segment shadow to determine whether or not to insert new control vertices and allow for resketching a local region of the curve, or to simply reposition existing vertices. For a given control polygon segment, a line can be constructed perpendicular to the segment through each of its end points. The area between these two (parallel) lines is the segment shadow. When a stroke intersects a control polygon segment, the head (end point) of the stroke is tested to see if it falls inside the shadow (Fig. 7.2), or if it is on one side (Fig. 7.3) or the other (Fig. 7.4). 7.3.1. Actions Based on a Single Stroke. We discuss the action to be taken in modifying the control polygon for each of the separate cases below, and in a later section discuss knot vector and order specification. 1. The first case occurs when the new stroke does not intersect the control polygon at all. In this instance, the new stroke is checked to find the closest distances to a whole segment of the existing polygon. If that distance measure is large, the stroke is completely ignored. It is assumed
FIG. 7.2. Inside segment shadow.

FIG. 7.3. Outside left segment shadow.

FIG. 7.4. Outside right segment shadow.
that if the new stroke did not somehow touch the previous strokes, it is a "miss" and should not be considered (Fig. 7.5). If it is close, the vertices that are close to the stroke are projected onto the stroke. Only the first and last are kept.

2. The second case is one in which the new stroke intersects the existing polygon exactly once and it is inside the segment shadow. The result of this action is to insert a new control vertex between the vertices of the intersected polygon segment. The position of the new vertex is that of the head of the new stroke (Fig. 7.6). This motion inserts a control point with a single stroke.

3. A third case is where the new stroke crosses the existing polygon exactly once, does so in either the first or the last segment of the polygon, is outside the polygon segment shadow, and its direction is towards the polygon end point. The result is to extend the polygon in the direction of the stroke. This is done by breaking the polygon at the point of intersection with the stroke, adding this point as a new vertex, and adding the head of the new stroke as the new polygon end point (Fig. 7.7). This action also extends the size of the polygon.

4. The next case applies to all other single crossings where the stroke is outside the segment shadow. At first sight, this case and the resulting effect might seem quite different. The idea is to move the end point of the polygon segment in the direction of the stroke to the position of the head of the stroke, and to add the intersection point as well, as shown in Fig. 7.8. This case is effectively the same as the previous case, except that the vertex being moved is not one of the end points.

5. The final case deals with multiple crossings. If the stroke crosses the polygon exactly twice, it seems quite intuitive to add the two intersection points to the control polygon and remove all the vertices of the polygon between the two new points (see Fig. 7.9).
This provides a cutting operation for smoothing out peaks and corners in the polygon. However, it is not clear what should be done when the new stroke crosses the polygon more than twice. Consider the example in Fig. 7.10. If we extend the notion of cutting to multiple crossings, then these results seem valid. However, if we take an example with an odd number of intersections (see Fig. 7.11), it no longer seems that this is the right thing to do. Consequently, the multiple crossing case is handled in the same manner as the case of exactly two crossings; that is, the first and last points where the stroke intersects the control polygon are added, and the control vertices between them are deleted (see Fig. 7.12). This same strategy is also followed in the case of a stroke that does not intersect any polygon segments but is close to at least two points, even if they
FIG. 7.5. No polygon intersections.
FIG. 7.6. Single intersection—shadow inside.
FIG. 7.7. Single intersection at end.
FIG. 7.8. Single intersection—shadow outside.
are not on the same segment. Such a process allows for smoothing out accidental "dents" from previous operations.

7.3.2. Multiple Stroke Action. Consider the actions taken above when a stroke intersected a segment once and was outside of the segment shadow. If the polygon segment was an end segment and the stroke went in an outward direction, the polygon was extended. If the segment was internal, or an end segment with an inward pointing stroke, the polygon was modified. Suppose the designer wanted to continue this process. Figure 7.13 shows the result of a previous sketching session or a curve that has been read in after being created using a different design paradigm. In paper-and-pencil sketching, if the designer wanted to modify a section of the curve they might create a sketch like the one shown in Fig. 7.14, where the polygon is from the original curve and the strokes are from the current effort. With pencil and paper the new strokes would be sketched with more pressure, or heavier lines, or perhaps be repeated to indicate that "these are the lines I want, not those." Such a session would not naturally take place with the actions previously described. We have therefore added a further multistroke option to detect the user's intent.

Consider a stroke that intersects a single polygon segment and is outside the segment shadow. If the segment is an end segment and the stroke points out of the region, the polygon is extended as above. However, in all other cases, the editor now waits to see what the user does next. If the intent is the effect in Fig. 7.14, then the user will continue editing that area so that the strokes intersect the "new" region first, with no intersections on the old polygon until the modification cycle is finished. A result of such a process is shown in Fig. 7.15. If it is intended to make just a small single stroke modification, then another segment will be crossed with the next stroke.
At that point the previously defined editing actions are carried out. In this way, whole regions of a curve can be modified hierarchically using the envelope paradigm. The results of the actions on the curve in Fig. 7.13 are shown in Fig. 7.16. The original and final curves are compared in Fig. 7.17. It is even possible to easily delete one end of a curve by cutting it off: the old unused section is removed if the newly edited region does not intersect the curve a second time.

7.4.
Sketched Curve Representation
The above method is one that allows sketching of a control polygon for a curve rather than sketching the shape of the curve directly. Consequently, unless interpolation is used, the resulting curve does not directly include the input data points. In developing this method, the B-spline representation was used because the shape of the control polygon is similar to the shape of the resulting curve. This is especially true for lower-order B-spline curves. Figure 7.18 shows examples of uniformly parametrized curves for the same control polygon using quadratic, cubic, and quartic splines. Notice that the
FIG. 7.9. Two intersections.

FIG. 7.10. Multiple intersections—apparent solution.
FIG. 7.11. Multiple intersections—failing case.
FIG. 7.12. Multiple intersections—actual solution.
FIG. 7.13. Original input curve and polygon.
FIG. 7.14. Multistroke modification.

FIG. 7.15. Derived polygon after editing.
quadratic formulation actually touches the control polygon at some point in each segment. In fact, for a quadratic curve with a uniform knot sequence, the point of contact is the midpoint of the corresponding polygon segment. For sketching curves, if higher-order continuity is not required, quadratic curves give a good approximation to actual pencil-and-paper sketching. As the order of the representation increases, the actual curves are further from the control polygon, and as a result, modification of the control polygon by sketching does not convey the same sense of direct control over the curve.
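The quadratic midpoint property is easy to check numerically. The sketch below is ours, not code from the chapter: a generic de Boor evaluator (the standard algorithm discussed elsewhere in this volume) applied to an order-3 curve with uniform knots, evaluated at an interior knot.

```python
def de_boor(t, k, knots, ctrl):
    """Evaluate an order-k B-spline curve at parameter t (de Boor's algorithm)."""
    # Knot span index: the largest mu in the valid range with knots[mu] <= t.
    mu = max(j for j in range(k - 1, len(ctrl)) if knots[j] <= t)
    d = [ctrl[j] for j in range(mu - k + 1, mu + 1)]   # the k relevant control points
    for r in range(1, k):
        for j in range(k - 1, r - 1, -1):
            i = j + mu - k + 1
            a = (t - knots[i]) / (knots[i + k - r] - knots[i])
            d[j] = tuple((1 - a) * p + a * q for p, q in zip(d[j - 1], d[j]))
    return d[k - 1]

# Order 3 (quadratic) with a uniform knot sequence; the curve is defined
# for parameters in [knots[2], knots[4]].
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
knots = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
p = de_boor(3.0, 3, knots, ctrl)   # evaluate at the interior knot t = 3
# p == (2.0, 2.0), the midpoint of the polygon segment from (1, 2) to (3, 2)
```

Evaluating at the other interior knot t = 2 likewise returns the midpoint of the first polygon segment, illustrating why the quadratic curve feels closest to the sketched polygon.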
FIG. 7.16. Derived polygon and resulting curve.
FIG. 7.17. Comparison of "before" and "after."
FIG. 7.18. Uniform curves with the same polygons: (a) quadratic, (b) cubic, (c) quartic.
7.5. Knot Vector Modification

It is not only possible to create curves using the envelope method, but also to edit curves that have been created by other methods or at other times. The method can be generally applied to modify control polygons, although, as pointed out above, the method is more intuitive when applied to lower-order curves. Now that we have discussed polygon modification, we will discuss knot vector modification.

7.5.1. Knot Vector Creation. In curve creation mode, there is no existing knot vector, so we are free to create the knot vector as the polygon is created. In this situation, there are two appropriate methods that we can use: uniform and chord length. In the uniform case knots are equally spaced. In the chord-length case the knot spacing is nonuniform and is used to approximate a chord-length parametrization for the curve. One way to approximate a chord-length parametrization is to set the knot spacing based on the relative lengths of the segments of the control polygon. Each of these methods has an advantage for intuitive curve sketching. Chord-length approximations have the advantage of following the shape of the control polygon more closely when the lengths of individual polygon segments vary widely (some very long strokes and some very short strokes). However, this method has a global effect, since we recompute the entire knot vector for every control polygon modification. Uniform knot vectors, on the other hand, have the advantage of allowing local changes. Control polygon modifications that change the number of control
points do require a new knot vector; however, since all uniform basis functions are translates of each other, the changes in the curve will be localized to the modified area. We have experimented with both these methods and feel that it is more important to provide local control using uniform knot vectors and to give up the slightly better fit (in certain cases) of chord-length knot vectors. This applies to all single stroke actions when creating a curve from scratch. 7.5.2. Hierarchical Knot Vector Modification in Stroke Editing.
There are two cases where we do not use the simple technique of uniform knot vectors. These are (1) multistroke hierarchical modifications and (2) modifications of existing curves with nonuniform knot vectors. In these cases, using the envelope method to modify the polygon for the curve should cause knot vector changes which preserve the curve in its original form outside of a local region of the curve. We consider a method for editing in which control polygon and corresponding knot vector modifications exhibit reasonable locality and can be easily reversed in certain situations to cancel a previous modification (i.e., an "undo" mechanism can be provided). This method is described as follows:

1. Find the first and last intersections of the stroke with the control polygon.

2. Modify the polygon according to the rules discussed above.

3. Find the last unmodified control point before the first intersection, and the first unmodified control point after the last intersection.

4. If the curve is of order k with knot vector {t_i}, then for each control point P_i the associated node is t_i^* = (t_{i+1} + ... + t_{i+k-1})/(k - 1). For each of the points found in step 3, find its associated node and then the knot closest to that node value. If there are multiple knots with that value, then if the node value is smaller than the closest knot value, pick the knot with the smallest subscript. If the node value is larger than the closest knot value, pick the knot with the largest subscript.

5. Replace all the knots in the interval found in step 4, bounded by both value and subscript, with the correct number of knots. This depends on the size of the new polygon, which in turn depends on the actions caused by the previous stroke. The geometric location of the knots inserted in this interval can be either evenly spaced or based on some geometric factor, like a weighted chord-length scheme based on the relative chord lengths of the new control polygon sides in the modified region.
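The node in step 4 is the familiar Greville abscissa, the average of k - 1 consecutive knots. A minimal sketch (the helper name is ours, not the chapter's), using the cubic knot vector from the example session in §7.6:

```python
def greville_nodes(knots, k):
    """Node t_i* = (t_{i+1} + ... + t_{i+k-1}) / (k - 1) for each control point."""
    n = len(knots) - k   # number of control points for an order-k curve
    return [sum(knots[i + 1 : i + k]) / (k - 1) for i in range(n)]

# Cubic case (order k = 4): 15 knots give 11 control points, hence 11 nodes.
knots = [0, 0, 0, 0, .3, .5, .65, .7, .75, .8, .9, 1, 1, 1, 1]
nodes = greville_nodes(knots, 4)
# The nodes run nondecreasingly from 0.0 to 1.0, one per control point.
```

Each control point thus gets a well-defined parameter value, which is what makes the "closest knot" search in step 4 unambiguous.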
Several issues influence the choice of whether to space the inserted knots evenly or to use a chord-length approximation, and they favor the even spacing. First, chord-length schemes are complex, requiring square roots, but do not make a substantial difference in the resulting curve. In fact, when the curve in a particular region is modified many times, the chord-length method can lead to relatively very short parametric regions which can expand into larger curve
FIG. 7.19. Original curve, nonuniform knot vector.
lengths. In that case, the parametrization will not be well behaved. The even spacing allows for much more editing before such problems arise. Also, using the uniform spacing allows the user to undo an insertion and return to the original knot vector through a simple deletion. When editing invokes large modifications in certain regions, but it is necessary to keep the curve identical in other regions, it is sometimes necessary to change the parametrization region. That is, the relative parametric length and geometry of the locally edited region should be compared to the parametric length and geometry of the static regions. They should be readjusted when they differ by too much.

7.6. A Hierarchical Session—An Example

In Fig. 7.19 the cubic curve shown is to be edited. It has as a knot vector {0, 0, 0, 0, .3, .5, .65, .7, .75, .8, .9, 1, 1, 1, 1}. It is shown in Fig. 7.20 after a single stroke (inside segment shadow) has modified the polygon, and the knot vector has been modified to {0, 0, 0, 0, .3, .5, .65, .675, .7, .75, .8, .9, 1, 1, 1, 1}. The next stroke cuts a corner off and intersects twice, resulting in Fig. 7.21 with knot vector {0, 0, 0, 0, .3, .5, .65, .66667, .683333, .7, .75, .8, .9, 1, 1, 1, 1}. The last figure in this sequence, Fig. 7.22, shows the curve sometime later in the editing process, when a knob has been designed in. It has knot vector {0, 0, 0, 0, .3, .5, .65, .655556, .66667, .683333, .686111, .68889, .691667, .7, .75, .8, .9, 1, 1, 1, 1}. Notice that the curve follows the control polygon very closely, especially in the area of the local modification.

Sketching with many strokes can result in a curve with many control points and knots. It might be useful to invoke a data reduction scheme [6], [1], [2] on the curve subsequent to sketching to automatically get a smooth curve which is very close to the sketched curve. One can even invoke data reduction on just a small region of the curve, keeping some regions exactly the same.

7.7.
Nonsketching Operations

The sketching approach as it stands allows all of the traditional operations on control polygons, i.e., there are ways to add, move, and delete vertices. However, in practice, people who have used other control polygon editing methods occasionally want to add, move, and delete vertices in the traditional
FIG. 7.20. Curve and polygon after the first stroke modification.
FIG. 7.21. Curve after the second single stroke modification.
FIG. 7.22. Final curve with knob.
ways. To accommodate these users, several extensions have been made to the approach that do not entirely fit within the sketching analogy. First, all of the operations in the sketching algorithm are done using a stroke operation. However, some users prefer to specify the position of each control vertex by a locate operation, which merely appends the new vertex to the end of the existing polygon. So this operation was added, but no "modes" were needed. Also, one of the most common operations in a traditional control polygon editor is to adjust the position of a control vertex slightly by dragging the point to a new position on the screen. This can be added as well by checking for button presses that fall directly on or near the control vertices; the new location for the vertex is the position of the button release. Finally, deleting a specific control point while leaving the others untouched is difficult with the sketching approach. Consequently, deleting a given control point by graphical selection is a useful addition to the approach.

7.8.
Conclusions
The sketching method presented here imitates certain operations that people have traditionally used with paper and pencil. The method chosen to support local knot vector modification is used for all the sketching operations, both additive and subtractive, as well as for simple modifications and control point deletion (nonsketching). This method for editing works well in combination with the data reduction sketching approaches discussed in [6], [1], [2] for realizing a shape from a rough drawing.

The method as originally developed seemed to be more useful as a "cutting" technique. While it made it easier to insert single points into the control polygon, and easy to delete regions of the curve, it did not make it straightforward to modify or resketch whole regions of the curve at a single time. The later addition of the ability to resketch a whole subregion of the curve seems to have enhanced its utility. Much of the effect of a pencil-and-paper sketch is obtained through using pressure to indicate which of several possible strokes is meant to be used. We look forward to investigating the use of a pressure sensitive input device to see whether computer sketching can simulate this effect.

Acknowledgments

We would like to thank the Department of Computer Science at the University of Utah, and the Alpha_1 Computer-Aided Geometric Design Group for the excellent research environment. This work was supported in part by DARPA grant N00014-88-K-0689. All opinions, findings, conclusions, or recommendations expressed in this document are those of the authors and do not necessarily reflect the views of the sponsoring agencies.
References

[1] M. J. Banks, A User Interface Model and Tools for Geometric Design, Master's thesis, University of Utah, Salt Lake City, UT, 1989.
[2] M. J. Banks and E. Cohen, Realtime spline curves from interactively sketched data, in Proceedings of the 1990 Symposium on Interactive 3D Graphics, ACM SIGGRAPH, March 1990.
[3] E. Cohen, T. Lyche, and R. F. Riesenfeld, Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics, Comput. Graphics Image Processing, 14 (1980), pp. 87-111.
[4] M. G. Cox, Curve fitting with piecewise polynomials, J. Inst. Math. Applic., 8 (1971), pp. 36-52.
[5] H. Jansen, E. Nullmeier, and K. Roediger, Handsketching as a human factor aspect in graphical interaction, Computers and Graphics, 1 (1985), pp. 195-210.
[6] T. Lyche and K. Mørken, Knot removal for parametric B-spline curves and surfaces, Comput. Aided Geom. Design, 4 (1987), pp. 11-27.
[7] C. F. O'Dell, Approximating Data with Parametric B-splines for Computer Aided Design, Master's thesis, University of Utah, Salt Lake City, UT, 1985.
[8] M. Plass and M. Stone, Curve-fitting with piecewise parametric cubics, Comput. Graphics, 17 (1983), pp. 229-239.
[9] J. R. Rice, Algorithm 525: ADAPT, adaptive smooth curve fitting (E2), ACM Trans. Math. Software, 4 (1978), pp. 82-94.
[10] P. J. Schneider, Phoenix: An Interactive Curve Design System Based on the Automatic Fitting of Hand-Sketched Curves, Master's thesis, University of Washington, Seattle, WA, 1988.
[11] C. Vercken, C. Potier, and S. Vignes, Spline curve fitting for an interactive design environment, in Proceedings of Theoretical Foundations of Computer Graphics and CAD, to appear.
Index
approximation, 187
Chebyshev spline, 149
  discrete, 162
  of order k, 162
condition number, 156
  for splines, 157, 158
Cox-de Boor-Mansfield recurrence, 14, 15, 46, 112
curve design, 179
curves, 187
backward error analysis, 155
Bernstein basis, 46, 48, 52, 53, 60,
Bernstein-Bezier form, 145
  piecewise, 145
Bernstein knots, 147, 169, 170
Bezier to B-spline conversion, 145
Bezier curves, 4-6, 48, 53, 54, 60
Bezier form, 23, 49, 54, 95
blossom, multiaffine, 34, 91
  multilinear, 7, 9, 30, 32-34, 36, 70, 73, 74, 91
Boehm's derivative algorithm, 40, 61, 77, 89
Boehm's knot insertion algorithm, 19, 23, 27, 28, 35, 39, 52, 54, 59, 73, 77, 78, 86, 89, 93, 94, 102, 106, 116, 119-123, 126-129
  factored version, 109
B-splines
  discrete, 112, 114, 139, 158
  nonuniform, 179
  Schoenberg-normalized, 112
B-spline curves, rational, 84
B-spline surfaces, 84
de Boor's algorithm, 6-8, 12-14, 16, 48, 49, 54, 59, 84
  evaluation, 59
  homogeneous, 30-32, 34, 37, 39, 43, 53, 56, 58, 59
de Boor-Fix dual functional, 28, 29, 35, 48, 52, 53, 56, 59, 61, 62, 92
dual linear functionals, 135, 142
de Casteljau algorithm, 4-6, 48, 53, 54
derivatives, 9, 10, 36-42, 54, 57, 58
design
  hierarchical, 179
diagonal property, 2, 9, 70, 91
differentiation, 36-39, 41, 44, 77
  two-term, 44, 59
  for B-spline curves, 38
discrete B-splines, 112, 114, 139, 158
discrete splines, 162
divided difference, 57, 59, 65, 74, 75, 90, 106
division triangle, 106, 107, 109
dual functional property, 3, 9, 70
dual functionals, 18, 35, 57
dual linear functionals, 135
face
  lone, 96
  veer, 96
factored knot insertion, 67-71, 105, 106, 131
  algorithm, 69-71
fast forward differencing, 65, 84
fki (factored knot insertion) triangles, 106, 109, 110, 121
forward differencing, 75
  adaptive, 86
generalized Oslo algorithm, 135
  stability of, 150
geometrically continuous splines, 62
Goldman variant, 110, 116, 119, 124-129, 131
Hermite basis, 62
homogeneous de Boor algorithm, 30-32, 34, 37, 39, 43, 53, 56, 58, 59
homogeneous knots, 30-32
homogeneous polynomial, 70
homogenization, 30, 31, 32, 61
Horner's method, 84
in-out property, 13, 46
integration, 44, 46, 47, 59, 61
intersection of spline spaces, 137
knot insertion matrix, 158, 160
knot insertion algorithm
  matrix version, 111, 112, 114, 115, 121, 124, 126, 127, 129, 131
knots
  affine, 32
  homogeneous, 30-32
  multiple, 84
  uniform, 169
  vector-valued, 32
knot vector modification, 188
knot vectors
  chord-length, 189
  nonuniform, 189, 190
  uniform, 189
Lagrange basis, 52, 60
Leibniz's rule, 59
lone face, 96
Marsden's identity, 28, 29, 35, 52, 53, 56, 59, 61, 62
  discrete version, 141
matrix version of the knot insertion algorithm, 111, 112, 114, 115, 121, 124, 126, 127, 129, 131
modified Goldman algorithm, 102, 103, 116, 119, 122, 123, 125-129, 131
monomial basis, 52, 54, 55, 58, 60, 61
Mørken's algorithm, 126
multiaffine blossom, 91
multiaffine property, 2, 15, 91
multilinear blossom, 7, 9, 32, 34-36, 70, 73, 74, 91
multilinear property, 7, 33, 70, 91
nested multiplication, 65, 66, 75, 76, 84
Newton basis, 56, 57, 59, 75, 76
Newton dual basis, 55-57, 59, 61
Newton form, 58, 65
numerical stability, 145
Oslo algorithm, 20-23, 27, 28, 35, 52, 54, 59, 77, 81, 82, 86, 89, 90, 93-96, 98-100, 112, 116, 119, 120, 122-124, 126-129, 135, 139
  generalized, 135
  generalized, stability of, 150
  Goldman variant, 20-23, 99-101, 106, 116, 119, 122-129, 131
  improved, 90, 98, 99, 116, 119, 120, 124
Oslo triangle, 93, 96, 98, 100, 102, 104, 112, 113
Pascal's triangle, 54
Polya basis, 29, 35, 48, 53, 60
power basis, 49, 50, 52, 60, 61
progressive basis, 14, 61
progressive curve, 13, 14, 60
  homogeneous, 32
progressive sequence, 13
  homogeneous, 32
progressive values, 16
projective geometry, 46
Ramshaw's blossoming algorithm, 14, 18, 35, 58, 59
recursive evaluation algorithm, 17
  for symmetric multiaffine polynomials, 16
  for symmetric multilinear polynomials, 34, 35
"rewritten" versions, 105, 121
Sablonniere's algorithm, 23-25, 27, 28, 35, 39, 49, 52, 54, 61, 90, 93, 95, 96, 97, 99, 102, 104, 116, 120, 122, 123, 130
  improved, 102
  recursive, 104, 121-123
Schoenberg-normalized B-splines, 112
sketching, 180, 181
splines
  Chebyshev, 149
  extremal cubic spline, 176
  geometrically continuous, 63
  multivariate, 62
"standard" versions of an algorithm, 105
sum of spline spaces, 137
symmetric sum, 15
symmetry property, 2, 9, 15, 70, 91
synthetic division, 54
Taylor's theorem, 54, 59
tensor product surface, 118, 119, 127
triangles
  division, 106, 107, 109
  fki, 106, 108, 109
  Oslo, 93, 96, 98, 100, 102, 104, 106, 112
two-term differentiation formula, 44, 59
  for B-spline curves, 38
uniform knots, 169
variation diminishing property, 89
veer face, 96