
Edited by: Luca Marchetti, Microsoft Research-University of Trento Centre for Computational and Systems Biology(COSBI), Italy

Reviewed by: Giovanni Mascali, Università della Calabria, Italy; Vincenzo Bonnici, Università degli Studi di Verona, Italy

This article was submitted to Optimization, a section of the journal Frontiers in Applied Mathematics and Statistics

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

We derive a variational model to fit a composite Bézier curve to a set of data points on a Riemannian manifold. The resulting curve is obtained in such a way that its mean squared acceleration is minimal in addition to remaining close to the data points. We approximate the acceleration by discretizing the squared second order derivative along the curve. We derive a closed-form, numerically stable and efficient algorithm to compute the gradient of a Bézier curve on manifolds with respect to its control points, expressed as a concatenation of so-called adjoint Jacobi fields. Several examples illustrate the capabilities and validity of this approach both for interpolation and approximation. The examples also illustrate that the approach outperforms previous works tackling this problem.

This paper addresses the problem of fitting a smooth curve to data points d_0, …, d_n lying on a Riemannian manifold and associated with parameter values t_0, …, t_n. The curve strikes a balance between a data proximity constraint and a smoothing regularization constraint.

Several applications motivate this problem in engineering and the sciences. For instance, curve fitting is of high interest in projection-based model order reduction of one-dimensional dynamical systems [

There exist different approaches to tackle the curve fitting problem. Among others, we name here the subdivision schemes approach [

where Γ is an admissible space of curves and ⟨·, ·⟩_{γ(t)} denotes the Riemannian metric at γ(t).

Optimization on manifolds has gained a lot of interest over the last decade, starting with the textbook [

The curve fitting problem (1) has been tackled in different ways over the past few years. Samir et al. [

A recent topic concerns curve fitting by means of Bézier curves. In that approach, the search space Γ is reduced to a set of composite Bézier curves. Those are a very versatile tool to model smooth curves and surfaces for real- and vector-valued discrete data points (see [

In this work, we derive a gradient descent algorithm to compute a differentiable composite Bézier curve

The paper is organized as follows. We introduce the necessary preliminaries—Bézier curves, Riemannian manifolds and Riemannian second order finite differences— in section 2. In section 3 we derive the gradient of the discretized mean squared acceleration of the composite Bézier curve with respect to its control points, and thus of the regularizer of (1). In section 4, we present the corresponding gradient descent algorithm, as well as an efficient gradient evaluation method, to solve (1) for different values of λ. The limit case where λ → ∞ is studied as well. Finally, in section 5, we validate, analyze and illustrate the performance of the algorithm for several numerical examples on the sphere ^{2} and on the special orthogonal group SO(3). We also compare our solution to existing Bézier fitting methods. A conclusion is given in section 6.

Consider the Euclidean space ℝ^m. A Bézier curve of degree K ∈ ℕ is a function β_K: [0, 1] → ℝ^m parametrized by control points b_0, …, b_K ∈ ℝ^m and defined as

β_K(t; b_0, …, b_K) := Σ_{j=0}^{K} b_j B_{j,K}(t),

where B_{j,K}(t) = (K choose j) t^j (1−t)^{K−j} denotes the j-th Bernstein basis polynomial of degree K. In particular, β_1 is just the line segment (1−t)b_0 + t b_1 connecting the two control points b_0 and b_1. The explicit formulae of the quadratic and cubic Bézier curves read

β_2(t; b_0, b_1, b_2) = (1−t)² b_0 + 2t(1−t) b_1 + t² b_2,
β_3(t; b_0, b_1, b_2, b_3) = (1−t)³ b_0 + 3t(1−t)² b_1 + 3t²(1−t) b_2 + t³ b_3,

for given control points b_0, …, b_3 ∈ ℝ^m.
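As a small plain-code illustration (our own sketch, not from the paper), the Bernstein form and the De Casteljau recursion evaluate the same curve; the helper names are ours:

```python
import math

def bernstein(j, K, t):
    # Bernstein basis polynomial B_{j,K}(t) = C(K,j) t^j (1-t)^(K-j)
    return math.comb(K, j) * t ** j * (1 - t) ** (K - j)

def bezier(t, ctrl):
    # Bézier curve of degree K = len(ctrl) - 1 in the Bernstein basis
    K = len(ctrl) - 1
    return sum(b * bernstein(j, K, t) for j, b in enumerate(ctrl))

def de_casteljau(t, ctrl):
    # the same curve, evaluated by repeated linear interpolation
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

# a cubic (K = 3): both evaluations agree, and the end points are interpolated
b = [0.0, 1.0, 3.0, 2.0]
print(abs(bezier(0.4, b) - de_casteljau(0.4, b)) < 1e-12)  # True
```

The equality of both evaluations in the Euclidean case is the starting point of the manifold generalization via geodesics described below.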

A composite Bézier curve B: [0, n] → ℝ^m is a curve composed of n Bézier segments, i.e., B(t) := β_{K_i}(t − i; b_0^i, …, b_{K_i}^i) for t ∈ [i, i+1],

where K_i ∈ ℕ denotes the degree of the i-th segment and the points b_0^i, …, b_{K_i}^i are its control points. Furthermore, we define p_i as the point at the junction of two consecutive Bézier segments, i.e., p_i := b_{K_{i−1}}^{i−1} = b_0^i.

(C¹ conditions for composite cubic Bézier curves). A composite cubic Bézier curve B is continuous if and only if b_3^{i−1} = b_0^i = p_i at every junction, and continuously differentiable (C¹) if, in addition, each junction point p_i is the midpoint of its neighboring inner control points, i.e., p_i = (b_2^{i−1} + b_1^i)/2.

Schematic representation of the composite cubic Bézier curve

We consider a complete Riemannian manifold M of dimension m, equipped with its Riemannian metric and the induced geodesic distance d.

We denote by ⟨·, ·⟩_a the inner product in the tangent space T_aM at a ∈ M, by exp_x: T_xM → M the exponential map, and by r_a ∈ ℝ the maximal radius such that the exponential map is bijective on the ball of that radius in T_aM; within the corresponding geodesic ball, the logarithmic map log_x at x is the inverse of exp_x.

In the following, we assume that both the exponential and the logarithmic map are available for the manifold and that they are computationally not too expensive to evaluate. Furthermore, we assume that the manifold is symmetric.

One well-known way to generalize Bézier curves to a Riemannian manifold M is to replace line segments by geodesics within the De Casteljau algorithm.

Consider two points x, y ∈ M. We define

g(t; x, y) := exp_x(t log_x y), t ∈ [0, 1],

as the geodesic connecting x = g(0; x, y) and y = g(1; x, y).

The De Casteljau algorithm is illustrated in the figure below for a cubic Bézier curve β_3(t; b_0, b_1, b_2, b_3). The general cubic Bézier curve can be explicitly expressed on a manifold by nested evaluations of geodesics.

Construction of a cubic Bézier curve via the De Casteljau algorithm.
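To make this concrete, the following minimal sketch (our own implementation, not the paper's MVIRT code) runs the De Casteljau algorithm on the sphere S² by replacing each linear interpolation with the geodesic g(t; x, y) = exp_x(t log_x y):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def exp_map(x, xi):
    # exponential map on the unit sphere S^2 embedded in R^3
    n = math.sqrt(sum(v * v for v in xi))
    if n < 1e-15:
        return x
    return tuple(math.cos(n) * a + math.sin(n) * b / n for a, b in zip(x, xi))

def log_map(x, y):
    # logarithmic map on S^2 (assumes y is not antipodal to x)
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
    th = math.acos(d)
    if th < 1e-15:
        return (0.0, 0.0, 0.0)
    return tuple(th / math.sin(th) * (b - d * a) for a, b in zip(x, y))

def geodesic(t, x, y):
    # g(t; x, y) = exp_x(t log_x y)
    return exp_map(x, tuple(t * v for v in log_map(x, y)))

def de_casteljau_sphere(t, ctrl):
    # replace line segments by geodesics in the classical recursion
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [geodesic(t, p, q) for p, q in zip(pts, pts[1:])]
    return pts[0]

b0, b3 = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)
b = [b0, normalize((0.2, 0.2, 1.0)), normalize((0.2, 1.0, 0.2)), b3]
p = de_casteljau_sphere(0.5, b)
print(abs(sum(v * v for v in p) - 1.0) < 1e-12)  # the point stays on S^2
```

Every intermediate point produced by the recursion lies on the manifold by construction, and the end points b_0 and b_3 are interpolated, exactly as in the Euclidean case.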

The conditions of continuity and differentiability are generalized to manifolds in Popiel and Noakes [

The composite Bézier curve B is continuously differentiable if, at each junction point p_i, the inner control points satisfy the geodesic midpoint condition

p_i = g(1/2; b_2^{i−1}, b_1^i),   (4)

i.e., b_1^i = exp_{p_i}(−log_{p_i} b_2^{i−1}).

A composite cubic Bézier curve on the sphere S². The end points p_i of the segments are shown.

We discretize the mean squared acceleration (MSA) of a curve γ: [0, n] → M,

A(γ) := ∫_0^n ‖(D²γ/dt²)(t)‖²_{γ(t)} dt,

i.e., the regularizer from (1). We approximate the squared norm of the second (covariant) derivative by the second order absolute finite difference introduced by Bačák et al. [

where the minimum defining d₂[x, y, z] := min_c d(c(1/2), y) is taken over all (not necessarily shortest) geodesics c connecting x = c(0) and z = c(1).

This definition is equivalent, on the Euclidean space, to d₂[x, y, z] = ‖(x + z)/2 − y‖, i.e., (up to a factor) to the classical second order finite difference ‖x − 2y + z‖.

Using equispaced points t_0, …, t_N on [0, n] with step size Δt, we approximate the integral A(γ) by a trapezoidal rule applied to the squared second order differences of the samples γ(t_0), …, γ(t_N), scaled by the appropriate powers of Δt.

For Bézier curves, the samples γ(t_k) are evaluated by the De Casteljau algorithm.
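In the Euclidean special case the discretization can be sketched as follows; the boundary weights of the paper's quadrature rule are simplified here (an assumption of this sketch), but the scaling by Δt shows how the sum approximates the integral of the squared second derivative:

```python
import math

def d2(x, y, z):
    # Euclidean second order absolute difference: distance from the
    # midpoint of the segment [x, z] to y, i.e., ||x - 2y + z|| / 2
    return math.dist(tuple((a + c) / 2 for a, c in zip(x, z)), y)

def discretized_msa(samples, dt):
    # Riemann-sum sketch of the discretized MSA: each interior sample
    # contributes the squared central second difference 2*d2/dt^2
    acc2 = [(2 * d2(p, q, r) / dt ** 2) ** 2
            for p, q, r in zip(samples, samples[1:], samples[2:])]
    return dt * sum(acc2)

dt = 0.01
# gamma(t) = t^2 has |gamma''|^2 = 4, so its MSA over [0, 1] is about 4
samples = [((i * dt) ** 2,) for i in range(101)]
print(abs(discretized_msa(samples, dt) - 4.0) < 0.1)  # True
# a straight line (a geodesic) has vanishing second order differences
line = [(0.5 * i * dt,) for i in range(101)]
print(discretized_msa(line, dt) < 1e-12)  # True
```

On a manifold, only `d2` changes: the midpoint of the segment is replaced by the midpoint of the geodesic from x to z, and the Euclidean norm by the geodesic distance.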

In order to minimize the discretized MSA, we need its gradient with respect to the control points b of the composite Bézier curve.

We first introduce the following notation. We denote by D_x f[η] the differential of a function f at x ∈ M applied to a tangent direction η ∈ T_xM, i.e., its directional derivative with respect to its argument x.

We now state the two following definitions, which are crucial for the rest of this section.

For multivariate functions f(b_0, …, b_K), we indicate the variable of differentiation, say b_0, by writing D_{b_0} f(b_0, …, b_K)[η].

The (Riemannian) gradient ∇_x f ∈ T_xM of a real-valued function f is the tangent vector satisfying ⟨∇_x f, η⟩_x = D_x f[η] for all η ∈ T_xM.

The remainder of this section is organized in four parts. We first recall the theory on Jacobi fields in section 3.1 and their relation to the differential of geodesics (with respect to start and end point). In section 3.2, we apply the chain rule to the composition of two geodesics, which appears within the De Casteljau algorithm. We use this result to build an algorithmic derivation of the differential of a general Bézier curve on manifolds with respect to its control points (section 3.3). We extend the result to composite Bézier curves in section 3.4, including their constraints on junction points p_i to enforce the C¹ condition (4), and finally gather these results to state the gradient of the discretized MSA.

In the following, we introduce a closed form of the differential D_x g(t; x, y)[η] of a geodesic with respect to its start point x.

As represented in the figure below, let γ_{x,ξ} denote the geodesic starting in γ_{x,ξ}(0) = x with direction γ̇_{x,ξ}(0) = ξ ∈ T_xM. We define the variation Γ_{g,ξ}(s, t) := g(t; γ_{x,ξ}(s), y), s ∈ (−ε, ε), t ∈ [0, 1],

where ε > 0. The corresponding Jacobi field J_{g,ξ} along g is defined as

J_{g,ξ}(t) := (∂/∂s) Γ_{g,ξ}(s, t) |_{s=0},

that represents the direction of the displacement of g(t; x, y) when the start point x moves in direction ξ.

Schematic representation of the variation Γ_{g,ξ}(s, t) of the geodesic g and of the corresponding Jacobi field J_{g,ξ}.

We directly obtain J_{g,ξ}(0) = ξ and J_{g,ξ}(1) = 0, as well as J_{g,ξ}(t) = D_x g(t; x, y)[ξ], since Γ_{g,ξ}(s, 0) = γ_{x,ξ}(s) and Γ_{g,ξ}(s, 1) = y for all s.

Let {ξ_1, …, ξ_m} be an orthonormal basis (ONB) of T_xM that diagonalizes the curvature operator, with corresponding eigenvalues κ_ℓ, ℓ = 1, …, m. For details, see Ch. 4.2 and 5 (Ex. 5) of [. We further denote by {Ξ_1(t), …, Ξ_m(t)} the parallel transported frame of {ξ_1, …, ξ_m} along g. Decomposing η = Σ_{ℓ=1}^m η_ℓ ξ_ℓ, the differential D_x g[η] becomes

D_x g(t; x, y)[η] = J_{g,η}(t) = Σ_{ℓ=1}^m η_ℓ α_ℓ(t) Ξ_ℓ(t),   (11)

where the coefficients α_ℓ read

α_ℓ(t) = sin((1−t)√κ_ℓ d_g) / sin(√κ_ℓ d_g) if κ_ℓ > 0,
α_ℓ(t) = 1 − t if κ_ℓ = 0,
α_ℓ(t) = sinh((1−t)√(−κ_ℓ) d_g) / sinh(√(−κ_ℓ) d_g) if κ_ℓ < 0,

and d_g = d(x, y) denotes the length of the geodesic g.

The Jacobi field of the reversed geodesic ḡ(t) := g(1 − t; x, y) yields the differential of g with respect to its end point y, i.e., D_y g(t; x, y)[ξ_ℓ] = J_{ḡ,ξ_ℓ}(1 − t).

Note that

Let g_1(t) := g(t; x, y) and consider the composed curve g_2(t) := g(t; g_1(t), z), i.e., a geodesic evaluation whose start point itself moves along g_1(t),

and by (10), we obtain

where the variation direction used in the Jacobi field is now the derivative of g_1(t). Similarly, we consider the composition g_3(t) := g(t; z, g_1(t)), where the moving point enters as the end point.

Note that the Jacobi field is reversed here, but that its variation direction is the same as the one of the Jacobi field introduced for g_2(t), by the relation between g_3 and ḡ_3. Furthermore, in this case, the variation direction is also computed by a Jacobi field, since D_x g_1(t; x, y)[η] = J_{g_1,η}(t).

Finally, the derivative of g_2 (resp. g_3) with respect to x on symmetric spaces is obtained as follows. Let η ∈ T_xM be a variation direction of g_1, and hence of g_2 (resp. g_3). As the inner derivative is itself a Jacobi field, the derivative of g_2 (resp. g_3) with respect to x follows from the chain rule as a concatenation of two Jacobi fields,

and accordingly for g_3.

Sections 3.1 and 3.2 introduced the necessary concepts to compute the derivative of a general Bézier curve β_K(t; b_0, …, b_K), as described in Equation (3), with respect to its control points b_j. For readability of the recursive structure investigated in the following, we introduce a slightly simpler notation and the following setting.

Let β_K(t; b_0, …, b_K) be a Bézier curve with control points b_0, …, b_K. The De Casteljau algorithm induces the recursion

β_i^{[k]}(t) := g(t; β_i^{[k−1]}(t), β_{i+1}^{[k−1]}(t)), β_i^{[0]}(t) := b_i,

for the i-th Bézier curve of degree k, such that β_0^{[K]}(t) = β_K(t; b_0, …, b_K).

Furthermore, given the control points {b_i, …, b_{i+k}}, we denote by

D_{b_j} β_i^{[k]}(t)[η], η ∈ T_{b_j}M,

its derivative with respect to one of its control points b_j, j ∈ {i, …, i+k}. For k = 0, this derivative is η if j = i and zero otherwise.

_{j},

Proof. Let us fix b_j. The curve β_i^{[k]} depends only on the control points b_i, …, b_{i+k} and is a Bézier curve of degree k. Within the recursion, the first sub-curve β_i^{[k−1]} is independent of b_{i+k}, and the latter sub-curve β_{i+1}^{[k−1]} is independent of b_i.

We prove the claim by induction over the degree k. For k = 0, the claim follows directly from the definition.

For _{x}

Consider the first term _{a}_{x}

For _{x}_{a}_{x}_{i+k}.

We prove the second term similarly. For

Finally, as _{x}_{b}_{x}_{i}, the assumption follows.

□

Figure

Schematic representation of the cases where elements compose the chained derivative of the _{i}, _{i+1}, …, _{i+k+1}}, and _{i+k}.

_{2} _{0}

_{2} _{2}

_{b1}β_{2}[η], _{1}

_{3} _{0} _{3} _{b1}β_{3}[η]

_{2}(_{0}, _{1}, _{2}) _{2}(_{1}, _{2}, _{3})_{1}

Construction and derivation tree of a Bézier curve β_3(t; b_0, b_1, b_2, b_3). The derivative with respect to a variable b_i is obtained by a recursion of Jacobi fields added at each leaf of the tree. The highlighted nodes mark the occurrences of b_1 within the tree.

_{b2}β_{3}[η] _{1}.

Note that the computation of the involved Jacobi fields follows the same decomposition tree as the De Casteljau algorithm (Figure 6A).

In this subsection we derive the differential of a composite Bézier curve B, taking the C¹ conditions into account. We simplify the notations from section 2.1 and fix the degree to K_i = 3, denoting by p_0 and p_n the start and end points of B, respectively. For ease of notation, we denote by b_i^− := b_2^{i−1} and b_i^+ := b_1^i the inner control points involved in the differentiability (C¹) condition investigation, cf. Figure

One possibility to enforce the C¹ condition (4) is to include it into the composite Bézier curve by replacing the control point b_i^+ by the geodesic reflection of b_i^− at p_i, i.e., by setting b_i^+ = exp_{p_i}(−log_{p_i} b_i^−).

This way, both the directional derivatives with respect to b_i^− and with respect to p_i change due to a further (innermost) chain rule.
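In the Euclidean special case, this reflection is b_i^+ = 2p_i − b_i^−, and one can check numerically that it makes the one-sided derivatives at the junction agree (a sketch with made-up control values):

```python
def cubic(t, b):
    # De Casteljau evaluation of a scalar cubic Bézier curve
    pts = list(b)
    while len(pts) > 1:
        pts = [(1 - t) * u + t * v for u, v in zip(pts, pts[1:])]
    return pts[0]

# segment i with inner control points b1, b2 and junction point p
b0, b1, b2, p = 0.0, 2.0, 5.0, 4.0
seg1 = [b0, b1, b2, p]
# C^1 choice for segment i+1: its first inner control point is the
# reflection of b2 through p (on a manifold: exp_p(-log_p(b2)))
seg2 = [p, 2 * p - b2, 1.0, 0.5]
h = 1e-6
left = (cubic(1.0, seg1) - cubic(1.0 - h, seg1)) / h
right = (cubic(h, seg2) - cubic(0.0, seg2)) / h
print(abs(left - right) < 1e-4)  # one-sided derivatives agree
```

Analytically, the left derivative is 3(p − b2) and the right derivative is 3((2p − b2) − p) = 3(p − b2), so the match is exact up to finite-difference error.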

(Derivative of a composite Bézier curve under the C¹ condition).

Proof. The first two cases follow from the derivation of a Bézier curve as before; for the two remaining cases, replacing b_i^+ by the reflection of b_i^− at p_i introduces one additional (innermost) term.

We now derive the gradient of the objective function (7). We introduce the abbreviation

_{1}, …, ξ_{m}} _{0}, …, _{N}

_{ℓ}: = _{ℓ}(

Therefore, for any tangential vector

By definition of

We compute

which, by Definition 4, again becomes

The term on the left of the inner product is given in Bačák et al. [, while the differential of the Bézier curve D_x B(t_j)[η] can be written as

Hence, we obtain

and by (17), (18), and (19), it follows

which yields the assertion (16).

The fitting problem has been tackled in different ways over the last decades. The approach with Bézier curves is more recent, and we refer to Absil et al. [

In this section, we present the numerical framework we use in order to fit a composite Bézier curve B to a set of data points d_0, …, d_n, such that we meet (1). For the sake of simplicity, we limit the study to the case where t_i = i.

where _{i} and _{i}, as _{i}) = _{i}.

The section is divided in three parts: the product manifold

Let us clarify the set _{B} is the set of the _{i} = _{i} is imposed as well.

For a given composite Bézier curve ^{1} conditions (4), the segments are determined by the points

We investigate the length of M. First, _{i} and _{i−1} since the value also occurs in the segment ^{1} constraints. The first segment is thus composed of

Minimizing

_{0}, …, _{n}. Equation (20) reads

where λ∈ℝ^{+} sets the priority to either the data term (large λ) or the mean squared acceleration (small λ) within the minimization. The gradient of the data term is given in Karcher [
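That gradient of the squared distance is the standard identity ∇_x d²(x, d_i) = −2 log_x d_i. The following self-contained sphere sketch (our own helpers, not MVIRT code) verifies it against a finite difference:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def exp_map(x, xi):
    # exponential map on the unit sphere S^2 embedded in R^3
    n = math.sqrt(sum(v * v for v in xi))
    if n < 1e-15:
        return x
    return tuple(math.cos(n) * a + math.sin(n) * b / n for a, b in zip(x, xi))

def log_map(x, y):
    # logarithmic map on S^2 (assumes y is not antipodal to x)
    c = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
    th = math.acos(c)
    if th < 1e-15:
        return (0.0, 0.0, 0.0)
    return tuple(th / math.sin(th) * (b - c * a) for a, b in zip(x, y))

def dist(x, y):
    return math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y)))))

x = normalize((1.0, 0.3, 0.2))
d = normalize((0.0, 1.0, 0.5))
# a unit tangent direction at x: project a vector onto T_x S^2
v = (0.0, 0.0, 1.0)
dot_vx = sum(a * b for a, b in zip(v, x))
xi = normalize(tuple(a - dot_vx * b for a, b in zip(v, x)))
eps = 1e-6
# numerical directional derivative of f(y) = d(y, d)^2 at x along xi
num = (dist(exp_map(x, tuple(eps * t for t in xi)), d) ** 2
       - dist(x, d) ** 2) / eps
# closed form: <grad f(x), xi> = <-2 log_x(d), xi>
closed = -2 * sum(a * b for a, b in zip(log_map(x, d), xi))
print(abs(num - closed) < 1e-4)  # True
```

This is the building block of the data term gradient: each data point d_i contributes −2λ log_{B(t_i)} d_i, pulled back to the control points via the adjoint differentials.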

_{i−1} and end point _{i} of the segments _{i−1} and

Since the _{i} are fixed by constraint, they can be omitted from the vector

Since there are

In the Euclidean space ℝ^m, the adjoint operator Λ* of a linear bounded operator Λ: ℝ^m → ℝ^q is the operator fulfilling ⟨Λx, y⟩ = ⟨x, Λ*y⟩ for all x ∈ ℝ^m and y ∈ ℝ^q.

The same can be defined for a linear operator mapping between two tangent spaces of a manifold, using the corresponding Riemannian inner products.
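In matrix terms, the Euclidean adjoint is simply the transpose, which the following tiny sketch checks:

```python
def apply(A, x):
    # apply a q x m matrix (list of rows) to a vector x in R^m
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# <A x, y> = <x, A^T y> for a 2 x 3 example
A = [[1.0, 2.0, 0.0], [0.5, -1.0, 3.0]]
x, y = [1.0, 2.0, 3.0], [4.0, -1.0]
print(abs(inner(apply(A, x), y) - inner(x, apply(transpose(A), y))) < 1e-12)  # True
```

On a manifold, the role of the transpose is played by the adjoint Jacobi field, which maps tangent vectors at g(t) back to tangent vectors at the start point of the geodesic.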

We are interested in the case where this operator is the differential D_x g(t; x, y) of a geodesic with respect to its start point,

where α_ℓ are the coefficients of the Jacobi field (11), and ξ_ℓ, Ξ_ℓ(t) denote the ONB of the tangent space and its parallel transported frame along g, respectively.

Hence the adjoint differential is given by

We introduce the corresponding adjoint Jacobi field.

Note that evaluating the adjoint Jacobi field J* involves the same transported frame {Ξ_1(t), …, Ξ_m(t)} and the same coefficients α_ℓ as the Jacobi field J.

The adjoint D* of the differential is useful in particular when computing the gradient

Especially for the evaluation of the gradient of the composite function

The main advantage of this technique appears in the case of composite functions, i.e., of the form f_1 ∘ f_2 (the generalization to compositions of more functions being straightforward).

The recursive computation of

_{1}, _{2}, _{3}∈[0, 1]

_{j};

Note also that even the differentiability constraint (4) yields only two further (outermost) adjoint Jacobi fields, namely those with respect to b_i^− and p_i, respectively, as stated in (10).

To address (22) or (23), we use a gradient descent algorithm, as described in Absil et al. [

Gradient descent algorithm on a manifold M.

Input: objective F, initial iterate x_0, step sizes s_k > 0.

Repeat: perform a gradient descent step x_{k+1} = exp_{x_k}(−s_k ∇F(x_k)), until a stopping criterion is met.

The step sizes are given by the Armijo line search condition presented in Absil et al. [

We set the step size to

As a stopping criterion, we use a maximal number k_max of iterations or a minimal change per iteration.
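The overall loop can be sketched in the Euclidean special case, where exp_x(η) = x + η; the Armijo parameters σ and β are chosen here arbitrarily for illustration:

```python
import math

def armijo_gradient_descent(f, grad, x0, beta=0.5, sigma=1e-4, alpha0=1.0,
                            max_iter=1000, tol=1e-8):
    # gradient descent with Armijo backtracking; on a general manifold
    # the update x - s*g would read exp_x(-s * grad f(x))
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(v * v for v in g)
        if math.sqrt(gnorm2) < tol:  # minimal gradient norm reached
            break
        s = alpha0
        # backtrack until the Armijo sufficient-decrease condition holds
        while f(tuple(a - s * v for a, v in zip(x, g))) > f(x) - sigma * s * gnorm2:
            s *= beta
        x = tuple(a - s * v for a, v in zip(x, g))
    return x

# a strictly convex quadratic with minimizer (1, -3)
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2
grad = lambda x: (2 * (x[0] - 1), 4 * (x[1] + 3))
xmin = armijo_gradient_descent(f, grad, (0.0, 0.0))
print(max(abs(xmin[0] - 1), abs(xmin[1] + 3)) < 1e-6)  # True
```

In the manifold setting of the paper, `grad` is assembled from the adjoint Jacobi fields of the discretized MSA and of the data term, and the iterate lives on the product manifold of control points.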

The gradient descent algorithm converges to a critical point if the function is continuously differentiable (C¹)

The (merely technical) disadvantage of the Log model is that the computation of the gradient involves further Jacobi fields than the ones presented above, namely those needed to compute the differentials of the logarithmic map, both with respect to its argument and with respect to its base point.

In this section, we provide several examples of our algorithm applied to the fitting problem (20).

We validate it first on the Euclidean space and verify that it retrieves the natural cubic smoothing spline. We then present examples on the sphere ^{2} and the special orthogonal group SO(3). We compare our results with the fast algorithm of Arnould et al. [^{m}, this is a linear system of equations) to the manifold setting; the curve is afterwards reconstructed by a classical De Casteljau algorithm. In the latter, the curve is obtained as a blending of solutions computed on carefully chosen tangent spaces, i.e., Euclidean spaces.

We will show in this section that the proposed method performs as well as the existing methods when the data is from the Euclidean space (section 5.1). This also means that all methods work equally well whenever the data points on the manifold are sufficiently local. However, we will show by the other examples (sections 5.2 and 5.3) that our proposed method outperforms the others whenever the data points are spread out on the manifold.

The following examples were implemented in MVIRT [^{2}^{3}

As a first example, we perform a minimization on the Euclidean space ℝ³. A classical result in ℝ^m is that the curve γ minimizing (1) is the natural cubic (smoothing) spline.

as well as the control points _{i} = _{i}, and

where the exponential map and geodesic on ℝ^{3} are actually the addition and line segments, respectively. Note that, by construction, the initial curve is continuously differentiable but (obviously) does not minimize (1). The parameter λ is set to 50. The MSA of this initial curve Ã(

The data points are given to the algorithm of Arnould et al. [

The initial interpolating Bézier curve in ℝ^{3} (dashed,

The set of control points ^{−4}, and α = 1. The stopping criteria are ^{−6}.

Both methods improve the initial functional value to approximately 4.981218, and the linear system approach and the gradient descent perform equally well. The objective value obtained by the gradient descent is smaller by 2.4524 × 10^{−11}, and the maximal distance between corresponding sampling points of the two resulting curves is of size 4.3 × 10^{−7}. Hence, in the Euclidean space, the proposed gradient descent yields the natural cubic spline, as one would expect.

We continue on the sphere S², embedded in ℝ³, where geodesics are great arcs. We use the data points

aligned on the geodesic connecting the north pole d_0 and the south pole d_2, and running through a point d_1 on the equator. We define the control points of the cubic Bézier curve as follows:

where _{0} and _{2} are temporary points and _{i} = _{i}. We obtain two segments smoothly connected since _{i}.

The initial curve (dashed,

The control points are optimized with our interpolating model, i.e., we fix the junction points to the data points d_0, d_1, d_2 and minimize the discretized MSA.

The curve, as well as the second and first order differences, is sampled with ^{−4}, and α = 1. The stopping criteria are slightly relaxed to 10^{−7} for the distance and 10^{−5} for the gradient, because of the sines and cosines involved in the exponential map.

The result is shown in the figure below. Since the resulting curve is expected to be the geodesic from d_0 to d_2 through d_1, we measure the performance first by looking at the resulting first order difference, which is constant, as can be seen in the figure, while the remaining quantities are of order 10^{−6}. These evaluations again validate the quality of the gradient descent.

as well as the control points _{i} = _{i}, and

The remaining control points _{i},

The composite Bézier curves are composed of three segments. The initial curve (dashed,

For fitting, we consider different values of λ and the same parameters as for the last example. The optimized curve fits the data points closer and closer as λ grows, and the limit λ → ∞ yields the interpolation case. On the other hand, smaller values of λ yield a looser fit, but also a smaller value of the mean squared acceleration. In the other limit case, λ = 0, the curve (more precisely, the control points of the Bézier curve) just follows the gradient flow to a geodesic.

The results are collected in Figure

Several corresponding functional values are listed in Table

Functional values for the three-segment composite Bézier curve on the sphere and different values of λ.

λ | functional value
original | 10.6122
∞ | 4.1339
10 | 1.6592
1 | 0.0733
0.1 | 0.0010
0.01 | 1.0814 × 10^{−5}
0.001 | 1.6240 × 10^{−7}
0 | 3.5988 × 10^{−9}

In this example we take the same data points _{i} as in (25) now interpreted as points on _{0} and since log_{p0}_{3} is not defined.

The initial composite Bézier curve on ^{2} (dashed,

Finally we compare our method with the blended splines introduced in Gousenbourger et al. [

denote the rotation matrices in the

These data points are shown in the first line of Figure

From top row to bottom row: (1) the initial data points (cyan), (2) the control points computed by the blended Bézier approach from Gousenbourger et al. [

We set λ = 10 and discretize (20) with

We perform the blended spline fitting with two segments and cubic splines. The resulting control points are shown in the second row of Figure ^{6}. We obtain the control points shown in the last line of Figure

We further compare both curves by looking at their absolute first order differences. In the third line of Figure

In this paper, we introduced a method to solve the problem of fitting a composite Bézier curve to data points on a manifold. We approximate the mean squared acceleration of the curve by a suitable second order difference and a trapezoidal rule, and derive the corresponding gradient with respect to the control points of the curve. The gradient is computed in closed form by exploiting the recursive structure of the De Casteljau algorithm. Therefore, we obtain a formula that reduces to a concatenation of adjoint Jacobi fields.

The evaluation of Jacobi fields is the only additional requirement compared to previous methods, which only evaluate exponential and logarithmic maps. For all of these maps, closed forms are available on symmetric manifolds.

On the Euclidean space our solution reduces to the natural smoothing spline, the unique acceleration minimizing polynomial curve. The numerical experiments further confirm that the method presented in this paper outperforms the tangent space(s)-based approaches with respect to the functional value.

It is still an open question whether there exists a second order absolute finite difference on a manifold that is jointly convex. Convergence would then follow by standard arguments for gradient descent. For such a model, another interesting point for future work is to find out whether an approximate evaluation of the Jacobi fields suffices for convergence. This would mean that, on manifolds where the Jacobi field can only be evaluated approximately by solving an ODE, the presented approach would still converge.

Both authors listed have contributed equally to the content of this work and approved the paper for publication.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

This work was supported by the following fundings: RB gratefully acknowledges support by DFG grant BE 5888/2–1; P-YG gratefully acknowledges support by the Fonds de la Recherche Scientifique – FNRS and the Fonds Wetenschappelijk Onderzoek – Vlaanderen under EOS Project no 30468160, and Communauté française de Belgique - Actions de Recherche Concertées (contract ARC 14/19-060). Figure

^{1}See

^{2}Open source, available at

^{3}Open source, available at