borglab/gtsam

How to marginalize historical factors

JACKLiuDay opened this issue · 2 comments

Hi, thank you to your team for the great work on GTSAM. I am trying to use GTSAM in my LiDAR SLAM system. What should I do if I want to delete some of the accumulated historical factors? For example, I have accumulated 100 factors, and I want to eliminate the first ten and then optimize and update. Are there any tutorials or similar resources that a newcomer could learn from?

I tried the code in gtsam_unstable, the IncrementalFixedLagSmoother, but I don't know how to use it. In the example file the timestamps change, and I don't understand how the fixed lag works.

```cpp
double deltaT = 0.25;
for (double time = deltaT; time <= 3.0; time += deltaT)
{
  // Define the keys related to this timestamp
  Key previousKey(1000 * (time - deltaT));
  Key currentKey(1000 * (time));

  // Assign the current key to the current timestamp
  newTimestamps[currentKey] = time;

  // Add a guess for this pose to the new values
  // Since the robot moves forward at 2 m/s, the position is simply: time[s] * 2.0[m/s]
  // (This is not a particularly good way to guess, but this is just an example)
  Pose2 currentPose(time * 2.0, 0.0, 0.0);
  newValues.insert(currentKey, currentPose);

  // Add odometry factors from two different sources with different error stats
  Pose2 odometryMeasurement1 = Pose2(0.61, -0.08, 0.02);
  noiseModel::Diagonal::shared_ptr odometryNoise1 = noiseModel::Diagonal::Sigmas(Vector3(0.1, 0.1, 0.05));
  newFactors.push_back(BetweenFactor<Pose2>(previousKey, currentKey, odometryMeasurement1, odometryNoise1));

  Pose2 odometryMeasurement2 = Pose2(0.47, 0.03, 0.01);
  noiseModel::Diagonal::shared_ptr odometryNoise2 = noiseModel::Diagonal::Sigmas(Vector3(0.05, 0.05, 0.05));
  newFactors.push_back(BetweenFactor<Pose2>(previousKey, currentKey, odometryMeasurement2, odometryNoise2));

  // Update the smoothers with the new factors. In this example, the batch smoother needs one
  // iteration to converge accurately. The iSAM2 smoother doesn't, but for simplicity we only
  // start printing estimates once both are ready.
  if (time >= 0.50)
  {
    smootherBatch.update(newFactors, newValues, newTimestamps);
    smootherISAM2.update(newFactors, newValues, newTimestamps);
    for (size_t i = 1; i < 2; ++i)
    {   // Optionally perform multiple iSAM2 iterations
      smootherISAM2.update();
    }

    // Print the optimized current pose
    cout << setprecision(5) << "Timestamp = " << time << endl;
    smootherBatch.calculateEstimate<Pose2>(currentKey).print("Batch Estimate:");
    smootherISAM2.calculateEstimate<Pose2>(currentKey).print("iSAM2 Estimate:");
    cout << endl;

    // Clear containers for the next iteration
    newTimestamps.clear();
    newValues.clear();
    newFactors.resize(0);
  }
}
```
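For context, the loop above uses `smootherBatch`, `smootherISAM2`, and the `newFactors`/`newValues`/`newTimestamps` containers without showing their setup. A minimal sketch of that setup, following the FixedLagSmootherExample shipped with GTSAM (the `lag` value and iSAM2 parameters here are the example's choices, not requirements):

```cpp
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam_unstable/nonlinear/BatchFixedLagSmoother.h>
#include <gtsam_unstable/nonlinear/IncrementalFixedLagSmoother.h>

using namespace gtsam;

// The smoother keeps variables whose timestamps lie within `lag` seconds of the
// newest timestamp; older variables are marginalized out automatically.
double lag = 2.0;

// Batch variant: re-optimizes the whole window on each update.
BatchFixedLagSmoother smootherBatch(lag);

// Incremental variant: wraps iSAM2.
ISAM2Params parameters;
parameters.relinearizeThreshold = 0.0;  // re-linearize every step (example setting)
parameters.relinearizeSkip = 1;
IncrementalFixedLagSmoother smootherISAM2(parameters, lag);

// Containers that the loop fills and clears on each iteration.
NonlinearFactorGraph newFactors;
Values newValues;
FixedLagSmoother::KeyTimestampMap newTimestamps;
```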
In this code, as I understand it, only keys whose timestamps fall within the last `lag` seconds are kept and optimized? Is that understanding correct?