easy_ml/logistic_regression.rs
/*!
Logistic regression example

Logistic regression can be used for classification. By performing linear regression on a logit
function, a linear classifier can be obtained that retains probabilistic semantics.

Given some data on a binary classification problem (ie a
[Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution)),
transforming the probabilities with a logit function:

<pre>log(p / (1 - p))</pre>

where p is the probability of success, ie P(y=True|x), puts them in the -infinity to infinity
range. If we assume a simple linear model over two inputs x1 and x2 then:

<pre>log(p / (1 - p)) = w0 + w1*x1 + w2*x2</pre>

For more complex data [basis functions](super::linear_regression)
can be used on the inputs to model non-linearity. Once we have a model we can define an
objective function to maximise in order to learn the weights. Once we have fixed weights
we can estimate the probability of new data by taking the inverse of the logit function
(the sigmoid function):

<pre>1 / (1 + e^(-(w0 + w1*x1 + w2*x2)))</pre>

which maps back into the 0 - 1 range and produces a probability for the unseen data. We can then
choose a cutoff of, say, 0.5 and we have a classifier that outputs True for any unseen data
estimated to have a probability ≥ 0.5 and False otherwise.

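To make the mapping concrete, the sketch below uses plain Rust with no library types (the helper
names are only for illustration and are defined properly in the full examples later) to check that
the sigmoid is the inverse of the logit and that the 0.5 probability cutoff corresponds to a linear
score of 0:

```
fn logit(p: f64) -> f64 {
    (p / (1.0 - p)).ln()
}

fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

// the sigmoid undoes the logit, mapping a probability back to itself
let p = 0.8;
assert!((sigmoid(logit(p)) - p).abs() < 1e-10);

// a probability of exactly 0.5 corresponds to a linear score of 0, so thresholding the
// sigmoid output at 0.5 is the same as checking the sign of w0 + w1*x1 + w2*x2
assert!(logit(0.5).abs() < 1e-10);
assert!((sigmoid(0.0) - 0.5).abs() < 1e-10);
```
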
# Arriving at the update rule

If the samples are independent of each other, ie knowing P(y<sub>1</sub>=True|**x<sub>1</sub>**)
tells you nothing about P(y<sub>2</sub>=True|**x<sub>2</sub>**), as is the case in a Bernoulli
distribution, then the probability P(**y**|X) is the product of each
P(y<sub>i</sub>|**x<sub>i</sub>**). For numerical stability reasons we often want to take logs
of the probability, which transforms the product into a sum.

log(P(**y**|X)) = the sum over all i data of (log(P(y<sub>i</sub>|**x<sub>i</sub>**)))

Our model sigmoid(**w**<sup>T</sup>**x**) is already defined as P(y<sub>i</sub>=True|**x<sub>i</sub>**) so

P(y<sub>i</sub>) = p<sub>i</sub> if y<sub>i</sub> = 1 and 1 - p<sub>i</sub> if y<sub>i</sub> = 0,
where p<sub>i</sub> = P(y<sub>i</sub>=True|**x<sub>i</sub>**) = sigmoid(**w**<sup>T</sup>**x<sub>i</sub>**)

this can be converted into a single equation because a<sup>0</sup> = 1 and a<sup>1</sup> = a:
when y<sub>i</sub> = 1 the second factor becomes 1, leaving p<sub>i</sub>, and when
y<sub>i</sub> = 0 the first factor becomes 1, leaving 1 - p<sub>i</sub>

P(y<sub>i</sub>) = (p<sub>i</sub>^y<sub>i</sub>) * ((1 - p<sub>i</sub>)^(1 - y<sub>i</sub>))

putting the two equations together gives the log probability we want to maximise in terms of
p<sub>i</sub>, which is itself in terms of our model's weights

log(P(**y**|X)) = the sum over all i data of (log((p<sub>i</sub>^y<sub>i</sub>) * (1 - p<sub>i</sub>)^(1 - y<sub>i</sub>)))

by log rules we can remove the exponents

log(P(**y**|X)) = the sum over all i data of (y<sub>i</sub> * log(p<sub>i</sub>) + (1 - y<sub>i</sub>) * log(1 - p<sub>i</sub>))

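As a quick aside, this objective is easy to state directly in code. The following is a minimal
sketch with plain Rust floats; the free function `log_likelihood` and the toy data are only for
illustration, and the full examples below compute the same quantity with the library's Matrix and
Tensor types:

```
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

// log(P(y|X)) for a toy dataset of (x_i, y_i) pairs under weights w, summing
// y_i * log(p_i) + (1 - y_i) * log(1 - p_i) where p_i = sigmoid(w^T x_i)
fn log_likelihood(w: &[f64; 3], data: &[([f64; 3], f64)]) -> f64 {
    data.iter()
        .map(|(x, y)| {
            let p = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2] * x[2]);
            y * p.ln() + (1.0 - y) * (1.0 - p).ln()
        })
        .sum()
}

// two toy samples, one of each class, with a leading 1.0 for the bias weight w0
let data = [([1.0, 2.0, 3.0], 1.0), ([1.0, -2.0, -1.0], 0.0)];
// the log of a probability is never positive, and approaches 0 as the model improves
assert!(log_likelihood(&[0.0, 0.0, 0.0], &data) < 0.0);
```
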
we want to maximise P(**y**|X) with our weights so we take the derivative with respect to **w**

d(log(P(**y**|X))) / d**w** = the sum over all i data of ((y<sub>i</sub> - P(y<sub>i</sub>=True|**x<sub>i</sub>**))**x<sub>i</sub>**)

where P(y<sub>i</sub>=True|**x<sub>i</sub>**) = 1 / (1 + e^(-(w0 + w1 * x1 + w2 * x2))) as defined
earlier. log(P(**y**|X)) is maximised when this derivative equals 0, and as logs are monotonic
this also maximises P(**y**|X). Unfortunately there is no closed form solution, so we must fit
**w** iteratively by gradient ascent (equivalently, gradient descent on the negative log
likelihood). In this example the dataset is small enough that we compute each update over all
the training data; for big data problems stochastic gradient descent would scale better.

The update rule:

**w<sub>new</sub>** = **w<sub>old</sub>** + learning_rate * (the sum over all i data of (y<sub>i</sub> - [1 / (1 + e^(-(**w**<sup>T</sup>**x<sub>i</sub>**)))])**x<sub>i</sub>**)

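Below is a minimal sketch of this update step using plain Rust arrays rather than the library's
Matrix or Tensor types; the function name `update_step`, the learning rate and the toy data are
only for illustration (the full examples that follow do the same thing on generated clusters):

```
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

// one gradient ascent step: w_new = w_old + learning_rate * the sum over all i of (y_i - p_i) * x_i
fn update_step(weights: [f64; 3], data: &[([f64; 3], f64)], learning_rate: f64) -> [f64; 3] {
    let mut new_weights = weights;
    for &(x, y) in data {
        // p_i = sigmoid(w^T x_i), the model's current estimate of P(y_i=True|x_i)
        let p = sigmoid(weights[0] * x[0] + weights[1] * x[1] + weights[2] * x[2]);
        for j in 0..3 {
            new_weights[j] += learning_rate * (y - p) * x[j];
        }
    }
    new_weights
}

// two toy samples, one of each class, with a leading 1.0 for the bias weight w0
let data = [([1.0, 2.0, 3.0], 1.0), ([1.0, -2.0, -1.0], 0.0)];
let mut weights = [0.0; 3];
for _ in 0..100 {
    weights = update_step(weights, &data, 0.1);
}
// repeated updates push the predicted probabilities towards the true labels
assert!(sigmoid(weights[0] + weights[1] * 2.0 + weights[2] * 3.0) > 0.5);
assert!(sigmoid(weights[0] + weights[1] * -2.0 + weights[2] * -1.0) < 0.5);
```
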
# Logistic regression example

## Matrix APIs

```
// Actual types of datasets logistic regression might be performed on include diagnostic
// datasets such as cancer/not cancer diagnosis and various measurements of patients.
// More abstract datasets could be related to coin flipping.
//
// To ensure our example is not overly complicated but requires two dimensions for the inputs
// to estimate the probability distribution of the random variable we sample from, we model
// two clusters from different classes to classify. As each class is arbitrary, we assign the first
// class as the True case that the model should predict >0.5 probability for, and the second
// class as the False case that the model should predict <0.5 probability for.

use easy_ml::matrices::Matrix;
use easy_ml::distributions::MultivariateGaussian;

use rand::{Rng, SeedableRng};
use rand::distr::{Iter, StandardUniform};
use rand_chacha::ChaCha8Rng;

use textplots::{Chart, Plot, Shape};

// use a fixed seed random generator from the rand crate
let mut random_generator = ChaCha8Rng::seed_from_u64(13);

// define two cluster centres using two 2d gaussians, making sure they overlap a bit
let class1 = MultivariateGaussian::new(
    Matrix::column(vec![ 2.0, 3.0 ]),
    Matrix::from(vec![
        vec![ 1.0, 0.1 ],
        vec![ 0.1, 1.0 ]]));

// make the second cluster more spread out so there will be a bit of overlap with the first
// in the (0,0) to (1, 1) area
let class2 = MultivariateGaussian::new(
    Matrix::column(vec![ -2.0, -1.0 ]),
    Matrix::from(vec![
        vec![ 2.5, 1.2 ],
        vec![ 1.2, 2.5 ]]));

// Generate 200 points for each cluster
let points = 200;
let mut random_numbers: Iter<StandardUniform, &mut ChaCha8Rng, f64> =
    (&mut random_generator).sample_iter(StandardUniform);
// we can unwrap here because we deliberately constructed a positive definite covariance matrix
// and supplied enough random numbers
let class1_points = class1.draw(&mut random_numbers, points).unwrap();
let class2_points = class2.draw(&mut random_numbers, points).unwrap();

// Plot each class of the generated data in a scatter plot
println!("Generated data points");

/**
 * Helper function to print a scatter plot of a provided matrix with x, y in each row
 */
fn scatter_plot(data: &Matrix<f64>) {
    // textplots expects a Vec<(f32, f32)> where each tuple is a (x,y) point to plot,
    // so we must transform the data from the cluster points slightly to plot
    let scatter_points = data.column_iter(0)
        // zip is used to merge the x and y columns in the data into a single tuple
        .zip(data.column_iter(1))
        // finally we map the tuples of (f64, f64) into (f32, f32) for handing to the library
        .map(|(x, y)| (x as f32, y as f32))
        .collect::<Vec<(f32, f32)>>();
    Chart::new(180, 60, -8.0, 8.0)
        .lineplot(&Shape::Points(&scatter_points))
        .display();
}

println!("Class 1");
scatter_plot(&class1_points);
println!("Class 2");
scatter_plot(&class2_points);

// for ease of use later we insert a 0th column into both classes' points so w0 + w1*x1 + w2*x2
// can be computed by w^T x
let class1_inputs = {
    let mut design_matrix = class1_points.clone();
    design_matrix.insert_column(0, 1.0);
    design_matrix
};
let class2_inputs = {
    let mut design_matrix = class2_points.clone();
    design_matrix.insert_column(0, 1.0);
    design_matrix
};

/**
 * The sigmoid function, taking values in the [-inf, inf] range and mapping
 * them into the [0, 1] range.
 */
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + ((-x).exp()))
}

/**
 * The logit function, taking values in the [0, 1] range and mapping
 * them into the [-inf, inf] range
 */
fn logit(x: f64) -> f64 {
    (x / (1.0 - x)).ln()
}

// First we initialise the weights matrix to some initial values
let mut weights = Matrix::column(vec![ 1.0, 1.0, 1.0 ]);

/**
 * The log of the likelihood function P(**y**|X). This is what we want to update our
 * weights to maximise as we want to train the model to predict y given **x**,
 * where y is the class and **x** is the two features the model takes as input.
 * It should be noted that something has probably gone wrong if you ever get 100%
 * performance on your training data: either your training data is linearly separable
 * or you are overfitting and the weights won't generalise to predicting the correct
 * class given unseen inputs.
 *
 * This function is mostly defined for completeness, we maximise it using the derivative
 * and never need to compute it.
 */
fn log_likelihood(
    weights: &Matrix<f64>, class1_inputs: &Matrix<f64>, class2_inputs: &Matrix<f64>
) -> f64 {
    // The probability of predicting all inputs as the correct class is the product
    // of the probability of predicting each individual input and class correctly, as we
    // assume each sample is independent of the others.
    let mut likelihood = 1_f64.ln();
    // the model should predict 1 for each class 1
    let predictions = (class1_inputs * weights).map(sigmoid);
    for i in 0..predictions.rows() {
        likelihood += predictions.get(i, 0).ln();
    }
    // the model should predict 0 for each class 2
    let predictions = (class2_inputs * weights).map(sigmoid).map(|p| 1.0 - p);
    for i in 0..predictions.rows() {
        likelihood += predictions.get(i, 0).ln();
    }
    likelihood
}

/**
 * The derivative of the log likelihood function, which we want to set to 0 in order
 * to maximise P(**y**|X).
 */
fn update_function(
    weights: &Matrix<f64>, class1_inputs: &Matrix<f64>, class2_inputs: &Matrix<f64>
) -> Matrix<f64> {
    let mut derivative = Matrix::column(vec![ 0.0, 0.0, 0.0 ]);

    // compute y - predictions for all the first class of inputs
    let prediction_errors = (class1_inputs * weights).map(sigmoid).map(|p| 1.0 - p);
    for i in 0..prediction_errors.rows() {
        // compute diff * x_i
        let diff = prediction_errors.get(i, 0);
        let ith_error = Matrix::column(class1_inputs.row_iter(i).collect()).map(|x| x * diff);
        derivative = derivative + ith_error;
    }

    // compute y - predictions for all the second class of inputs
    let prediction_errors = (class2_inputs * weights).map(sigmoid).map(|p| 0.0 - p);
    for i in 0..prediction_errors.rows() {
        // compute diff * x_i
        let diff = prediction_errors.get(i, 0);
        let ith_error = Matrix::column(class2_inputs.row_iter(i).collect()) * diff;
        derivative = derivative + ith_error;
    }

    derivative
}

let learning_rate = 0.002;

let mut log_likelihood_progress = Vec::with_capacity(25);

// For this example we cheat and have simply found what number of iterations and learning rate
// yields a correct decision boundary, so we don't actually check for convergence. In a real
// example you would stop once the updates for the weights become 0 or very close to 0.
for i in 0..25 {
    let update = update_function(&weights, &class1_inputs, &class2_inputs);
    weights = weights + (update * learning_rate);
    log_likelihood_progress.push(
        (i as f32, log_likelihood(&weights, &class1_inputs, &class2_inputs) as f32)
    );
}

println!("Log likelihood over 25 iterations (bigger is better as logs are monotonic)");
Chart::new(180, 60, 0.0, 15.0)
    .lineplot(&Shape::Lines(&log_likelihood_progress))
    .display();

println!("Decision boundary after 25 iterations");
decision_boundary(&weights);

// The model should have learnt to classify class 1 correctly at the expected value
// of the cluster
assert!(
    sigmoid(
        (weights.transpose() * Matrix::column(vec![ 1.0, 2.0, 3.0 ])).scalar()
    ) > 0.5);

// The model should have learnt to classify class 2 correctly at the expected value
// of the cluster
assert!(
    sigmoid(
        (weights.transpose() * Matrix::column(vec![ 1.0, -2.0, -1.0 ])).scalar()
    ) < 0.5);

/**
 * A utility function to plot the decision boundary of the model. As the terminal plotting
 * library doesn't support colored plotting when showing unit test output this is a little
 * challenging to do given we have two dimensions of inputs and one dimension of output which is
 * also real valued as logistic regression computes probability. This could best be done with a 3d
 * plot or a heatmap, but is done with this function by taking 0.5 as the cutoff for
 * classification, generating a grid of points in the two dimensional space and classifying all
 * of them, then plotting the ones classified as class 1.
 */
fn decision_boundary(weights: &Matrix<f64>) {
    // compute a matrix of coordinate pairs from (-8.0, -8.0) to (8.0, 8.0)
    let grid_values = Matrix::empty(0.0, (160, 160));
    // create a matrix of tuples combining every combination of coordinates
    let grid_values = grid_values.map_with_index(|_, i, j| (
        (i as f64 - 80.0) * 0.1, (j as f64 - 80.0) * 0.1)
    );
    // iterate through every tuple and see if the model predicts class 1
    let points = grid_values.column_major_iter()
        .map(|(x1, x2)| {
            let input = Matrix::column(vec![ 1.0, x1, x2 ]);
            let prediction = sigmoid((weights.transpose() * input).scalar());
            if prediction > 0.5 {
                (x1, x2, 1)
            } else {
                (x1, x2, 0)
            }
        })
        .filter(|(_, _, class)| class == &1)
        .map(|(x1, x2, _)| (x1 as f32, x2 as f32))
        .collect::<Vec<(f32, f32)>>();
    Chart::new(180, 60, -8.0, 8.0)
        .lineplot(&Shape::Points(&points))
        .display();
}
```

## Tensor APIs
```
// Actual types of datasets logistic regression might be performed on include diagnostic
// datasets such as cancer/not cancer diagnosis and various measurements of patients.
// More abstract datasets could be related to coin flipping.
//
// To ensure our example is not overly complicated but requires two dimensions for the inputs
// to estimate the probability distribution of the random variable we sample from, we model
// two clusters from different classes to classify. As each class is arbitrary, we assign the first
// class as the True case that the model should predict >0.5 probability for, and the second
// class as the False case that the model should predict <0.5 probability for.
use easy_ml::tensors::Tensor;
use easy_ml::distributions::MultivariateGaussianTensor;

use rand::{Rng, SeedableRng};
use rand::distr::{Iter, StandardUniform};
use rand_chacha::ChaCha8Rng;

use textplots::{Chart, Plot, Shape};

// use a fixed seed random generator from the rand crate
let mut random_generator = ChaCha8Rng::seed_from_u64(13);

// define two cluster centres using two 2d gaussians, making sure they overlap a bit
let class1 = MultivariateGaussianTensor::new(
    Tensor::from([("means", 2)], vec![ 2.0, 3.0 ]),
    Tensor::from(
        [("rows", 2), ("columns", 2)],
        vec![
            1.0, 0.1,
            0.1, 1.0
        ]
    )
).unwrap(); // unwrapping here is fine because this is constructed from a known valid input

// make the second cluster more spread out so there will be a bit of overlap with the first
// in the (0,0) to (1, 1) area
let class2 = MultivariateGaussianTensor::new(
    Tensor::from([("means", 2)], vec![ -2.0, -1.0 ]),
    Tensor::from(
        [("rows", 2), ("columns", 2)],
        vec![
            2.5, 1.2,
            1.2, 2.5
        ]
    )
).unwrap(); // unwrapping here is fine because this is constructed from a known valid input

// Generate 200 points for each cluster
let points = 200;
let mut random_numbers: Iter<StandardUniform, &mut ChaCha8Rng, f64> =
    (&mut random_generator).sample_iter(StandardUniform);
// we can unwrap here because we deliberately constructed a positive definite covariance matrix
// and supplied enough random numbers
let class1_points = class1.draw(&mut random_numbers, points, "data", "feature").unwrap();
let class2_points = class2.draw(&mut random_numbers, points, "data", "feature").unwrap();

// Plot each class of the generated data in a scatter plot
println!("Generated data points");

/**
 * Helper function to print a scatter plot of a provided matrix with x, y in each row
 */
fn scatter_plot(data: &Tensor<f64, 2>) {
    // textplots expects a Vec<(f32, f32)> where each tuple is a (x,y) point to plot,
    // so we must transform the data from the cluster points slightly to plot
    let scatter_points = data
        .select([("feature", 0)])
        .iter()
        // zip is used to merge the x and y columns in the data into a single tuple
        .zip(data.select([("feature", 1)]).iter())
        // finally we map the tuples of (f64, f64) into (f32, f32) for handing to the library
        .map(|(x, y)| (x as f32, y as f32))
        .collect::<Vec<(f32, f32)>>();
    Chart::new(180, 60, -8.0, 8.0)
        .lineplot(&Shape::Points(&scatter_points))
        .display();
}

println!("Class 1");
scatter_plot(&class1_points);
println!("Class 2");
scatter_plot(&class2_points);

// for ease of use later we insert a 0th column into both classes' points so w0 + w1*x1 + w2*x2
// can be computed by w^T x
fn design_matrix(points: &Tensor<f64, 2>) -> Tensor<f64, 2> {
    let mut design_matrix = Tensor::empty(
        [("data", points.shape()[0].1), ("feature", 3)],
        1.0
    );
    let mut data = points.iter();
    for ([_row, feature], x) in design_matrix.iter_reference_mut().with_index() {
        *x = match feature {
            0 => 1.0,
            1 | 2 | _ => data.next().unwrap(),
        };
    }
    design_matrix
}
let class1_inputs = design_matrix(&class1_points);
let class2_inputs = design_matrix(&class2_points);

/**
 * The sigmoid function, taking values in the [-inf, inf] range and mapping
 * them into the [0, 1] range.
 */
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + ((-x).exp()))
}

/**
 * The logit function, taking values in the [0, 1] range and mapping
 * them into the [-inf, inf] range
 */
fn logit(x: f64) -> f64 {
    (x / (1.0 - x)).ln()
}

// First we initialise the weights matrix to some initial values
let mut weights = Tensor::from([("weights", 3)], vec![ 1.0, 1.0, 1.0 ]);

/**
 * The log of the likelihood function P(**y**|X). This is what we want to update our
 * weights to maximise as we want to train the model to predict y given **x**,
 * where y is the class and **x** is the two features the model takes as input.
 * It should be noted that something has probably gone wrong if you ever get 100%
 * performance on your training data: either your training data is linearly separable
 * or you are overfitting and the weights won't generalise to predicting the correct
 * class given unseen inputs.
 *
 * This function is mostly defined for completeness, we maximise it using the derivative
 * and never need to compute it.
 */
fn log_likelihood(
    weights: &Tensor<f64, 1>, class1_inputs: &Tensor<f64, 2>, class2_inputs: &Tensor<f64, 2>
) -> f64 {
    // The probability of predicting all inputs as the correct class is the product
    // of the probability of predicting each individual input and class correctly, as we
    // assume each sample is independent of the others.
    let mut likelihood = 1_f64.ln();
    // the model should predict 1 for each class 1
    let predictions = (class1_inputs * weights.expand([(1, "columns")])).map(sigmoid);
    for x in predictions.iter() {
        likelihood += x.ln();
    }
    // the model should predict 0 for each class 2
    let predictions = (class2_inputs * weights.expand([(1, "columns")]))
        .map(sigmoid)
        .map(|p| 1.0 - p);
    for x in predictions.iter() {
        likelihood += x.ln();
    }
    likelihood
}

/**
 * The derivative of the log likelihood function, which we want to set to 0 in order
 * to maximise P(**y**|X).
 */
fn update_function(
    weights: &Tensor<f64, 1>, class1_inputs: &Tensor<f64, 2>, class2_inputs: &Tensor<f64, 2>
) -> Tensor<f64, 1> {
    let mut derivative = Tensor::from([("weights", 3)], vec![ 0.0, 0.0, 0.0 ]);

    // compute y - predictions for all the first class of inputs
    let prediction_errors = (class1_inputs * weights.expand([(1, "columns")]))
        .map(sigmoid).map(|p| 1.0 - p);
    for ([row, _feature], diff) in prediction_errors.iter().with_index() {
        // compute diff * x_i
        let mut ith_error = class1_inputs
            .select([("data", row)])
            .map(|x| x * diff);
        ith_error.rename(["weights"]);
        derivative = derivative + ith_error;
    }

    // compute y - predictions for all the second class of inputs
    let prediction_errors = (class2_inputs * weights.expand([(1, "columns")]))
        .map(sigmoid).map(|p| 0.0 - p);
    for ([row, _feature], diff) in prediction_errors.iter().with_index() {
        // compute diff * x_i
        let mut ith_error = class2_inputs
            .select([("data", row)])
            .map(|x| x * diff);
        ith_error.rename(["weights"]);
        derivative = derivative + ith_error;
    }

    derivative
}

let learning_rate = 0.002;

let mut log_likelihood_progress = Vec::with_capacity(25);

// For this example we cheat and have simply found what number of iterations and learning rate
// yields a correct decision boundary, so we don't actually check for convergence. In a real
// example you would stop once the updates for the weights become 0 or very close to 0.
for i in 0..25 {
    let update = update_function(&weights, &class1_inputs, &class2_inputs);
    weights = weights + (update * learning_rate);
    log_likelihood_progress.push(
        (i as f32, log_likelihood(&weights, &class1_inputs, &class2_inputs) as f32)
    );
}

println!("Log likelihood over 25 iterations (bigger is better as logs are monotonic)");
Chart::new(180, 60, 0.0, 15.0)
    .lineplot(&Shape::Lines(&log_likelihood_progress))
    .display();

println!("Decision boundary after 25 iterations");
decision_boundary(&weights);

// The model should have learnt to classify class 1 correctly at the expected value
// of the cluster
assert!(
    sigmoid(
        weights.scalar_product(Tensor::from([("weights", 3)], vec![ 1.0, 2.0, 3.0 ]))
    ) > 0.5);

// The model should have learnt to classify class 2 correctly at the expected value
// of the cluster
assert!(
    sigmoid(
        weights.scalar_product(Tensor::from([("weights", 3)], vec![ 1.0, -2.0, -1.0 ]))
    ) < 0.5);

/**
 * A utility function to plot the decision boundary of the model. As the terminal plotting
 * library doesn't support colored plotting when showing unit test output this is a little
 * challenging to do given we have two dimensions of inputs and one dimension of output which is
 * also real valued as logistic regression computes probability. This could best be done with a 3d
 * plot or a heatmap, but is done with this function by taking 0.5 as the cutoff for
 * classification, generating a grid of points in the two dimensional space and classifying all
 * of them, then plotting the ones classified as class 1.
 */
fn decision_boundary(weights: &Tensor<f64, 1>) {
    // compute a matrix of coordinate pairs from (-8.0, -8.0) to (8.0, 8.0)
    let grid_values = Tensor::empty([("x", 160), ("y", 160)], 0.0);
    // create a matrix of tuples combining every combination of coordinates
    let grid_values = grid_values.map_with_index(|[i, j], _| (
        (i as f64 - 80.0) * 0.1, (j as f64 - 80.0) * 0.1)
    );
    // iterate through every tuple and see if the model predicts class 1
    let points = grid_values.index_by(["y", "x"])
        .iter()
        .map(|(x1, x2)| {
            let input = Tensor::from([("weights", 3)], vec![ 1.0, x1, x2 ]);
            // map the score through the sigmoid so the 0.5 cutoff is a probability,
            // as in the Matrix API example
            let prediction = sigmoid(weights.scalar_product(input));
            if prediction > 0.5 {
                (x1, x2, 1)
            } else {
                (x1, x2, 0)
            }
        })
        .filter(|(_, _, class)| class == &1)
        .map(|(x1, x2, _)| (x1 as f32, x2 as f32))
        .collect::<Vec<(f32, f32)>>();
    Chart::new(180, 60, -8.0, 8.0)
        .lineplot(&Shape::Points(&points))
        .display();
}
```
*/