New version of the lecture

2024-02-02 09:49:21 +01:00
parent 050956c01b
commit f6431654dc
190 changed files with 7703 additions and 2629 deletions

Lecture.html: file diff suppressed because it is too large.


@@ -1,16 +1,16 @@
---
title: "Biodiversity metrics \ and metabarcoding"
author: "Eric Coissac"
date: "28/01/2019"
date: "02/02/2024"
bibliography: inst/REFERENCES.bib
format:
revealjs:
smaller: true
transition: slide
scrollable: true
theme: simple
html-math-method: mathjax
editor: visual
---
```{r setup, include=FALSE}
@@ -25,20 +25,19 @@ opts_chunk$set(echo = FALSE,
cache.lazy = FALSE)
```
# Summary
- The MetabarSchool Package
- What do the read numbers per PCR mean?
- Rarefaction vs. relative frequencies
- Alpha diversity metrics
- Beta diversity metrics
- Multidimensional analysis
- Comparison between datasets
# The MetabarSchool Package
## Installing the package
You need the *devtools* package
@@ -62,7 +61,7 @@ install.packages("vegan",dependencies = TRUE)
## The mock community {.flexbox .vcenter .smaller}
A 16-plant mock community
```{r}
data("plants.16")
@@ -81,13 +80,12 @@ knitr::kable(x[,-(4:5)],
data("positive.samples")
```
- `r nrow(positive.samples)` PCRs of the mock community using SPER02 trnL-P6-Loop primers
- `r length(table(positive.samples$dilution))` dilutions of the mock community: `r paste0('1/',names(table(positive.samples$dilution)))`
- `r as.numeric(table(positive.samples$dilution)[1])` repeats per dilution
## Loading data
```{r echo=TRUE}
@@ -98,9 +96,7 @@ data("positive.samples")
data("positive.motus")
```
- `positive.count`: the read count matrix, $`r nrow(positive.count)` \; PCRs \; \times \; `r ncol(positive.count)` \; MOTUs$
```{r}
knitr::kable(positive.count[1:5,1:5],
@@ -111,11 +107,11 @@ knitr::kable(positive.count[1:5,1:5],
```
<br>
```{r echo=TRUE,eval=FALSE}
positive.count[1:5,1:5]
```
## Loading data
```{r echo=TRUE}
@@ -126,9 +122,7 @@ data("positive.samples")
data("positive.motus")
```
- `positive.samples`: a `data.frame` of `r nrow(positive.samples)` rows and `r ncol(positive.samples)` columns describing each PCR
```{r}
knitr::kable(head(positive.samples,n=3),
@@ -138,11 +132,11 @@ knitr::kable(head(positive.samples,n=3),
```
<br>
```{r echo=TRUE,eval=FALSE}
head(positive.samples,n=3)
```
## Loading data
```{r echo=TRUE}
@@ -153,8 +147,7 @@ data("positive.samples")
data("positive.motus")
```
- `positive.motus`: a `data.frame` of `r nrow(positive.motus)` rows and `r ncol(positive.motus)` columns describing each MOTU
```{r}
knitr::kable(head(positive.motus,n=3),
@@ -164,11 +157,12 @@ knitr::kable(head(positive.motus,n=3),
```
<br>
```{r echo=TRUE,eval=FALSE}
head(positive.motus,n=3)
```
## Removing singleton sequences {.flexbox .vcenter}
Singleton sequences are observed only once over the complete dataset.
@@ -176,7 +170,6 @@ Singleton sequences are observed only once over the complete dataset.
table(colSums(positive.count) == 1)
```
```{r}
kable(t(table(colSums(positive.count) == 1)),
format = "html") %>%
@@ -194,11 +187,9 @@ positive.count = positive.count[,are.not.singleton]
positive.motus = positive.motus[are.not.singleton,]
```
- `positive.count` is now a $`r nrow(positive.count)` \; PCRs \; \times \; `r ncol(positive.count)` \; MOTUs$ matrix
## Not all the PCRs have the same number of reads {.flexbox .vcenter}
Despite all standardization efforts
@@ -210,9 +201,9 @@ hist(rowSums(positive.count),
main = "Number of read per PCR")
```
<div class="green">
::: green
Is it related to the amount of DNA in the extract?
:::
## What do the read numbers per PCR mean? {.smaller}
@@ -222,17 +213,13 @@ boxplot(rowSums(positive.count) ~ positive.samples$dilution,log="y")
abline(h = median(rowSums(positive.count)),lw=2,col="red",lty=2)
```
```{r}
SC = summary(aov((rowSums(positive.count)) ~ positive.samples$dilution))[[1]]$`Sum Sq`
```
<div class="red2">
<center>
Only `r round((SC/sum(SC)*100)[1],1)`% of the PCR read count
variation is explain by dilution
</center>
</div>
::: red2
<center>Only `r round((SC/sum(SC)*100)[1],1)`% of the PCR read count variation is explained by dilution</center>
:::
## You must normalize your read counts
@@ -242,7 +229,6 @@ Two options:
Randomly subsample the same number of reads for all the PCRs
### Relative frequencies
Divide the read count of each MOTU in each sample by the total read count of the same sample
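A minimal sketch of this normalization, assuming `positive.count` is the PCRs x MOTUs count matrix loaded above (vegan's `decostand(positive.count, method = "total")` computes the same thing; the lecture's own chunk may differ):

```{r echo=TRUE, eval=FALSE}
# Sketch: divide each MOTU count by the total read count of its PCR (row-wise)
positive.count.relfreq = sweep(positive.count, 1,
                               rowSums(positive.count), "/")
rowSums(positive.count.relfreq)[1:3]   # every PCR now sums to 1
```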
@@ -255,9 +241,9 @@ $$
library(vegan)
```
## Rarefying read count (1) {.flexbox .vcenter}
- We look for the minimum read number per PCR
```{r echo=TRUE}
min(rowSums(positive.count))
@@ -267,7 +253,7 @@ min(rowSums(positive.count))
positive.count.rarefied = rrarefy(positive.count,2000)
```
## Rarefying read count (2) {.flexbox .vcenter}
```{r fig.height=4}
par(mfrow=c(1,2),bg=NA)
@@ -285,7 +271,7 @@ hist(log10(colSums(positive.count.rarefied)+1),
xlab = TeX("$\\log_{10}(reads per MOTUs)$"))
```
## Rarefying read count (3) {.flexbox .vcenter}
Identifying the MOTUs with a read count greater than $0$ after rarefaction.
@@ -298,7 +284,7 @@ are.still.present[1:5]
table(are.still.present)
```
## Rarefying read count (4) {.flexbox .vcenter}
```{r echo=TRUE, fig.height=3.5}
par(bg=NA)
@@ -309,7 +295,7 @@ The MOTUs removed by rarefaction were at most occurring `r max(colSums(positive.
The MOTUs kept by rarefaction occurred at least `r min(colSums(positive.count[,are.still.present]))` times
## Rarefying read count (5) {.vcenter}
### Keep only sequences with reads after rarefaction
@@ -318,9 +304,7 @@ positive.count.rarefied = positive.count.rarefied[,are.still.present]
positive.motus.rare = positive.motus[are.still.present,]
```
<center>positive.motus.rare is now a $`r nrow(positive.count.rarefied)` \; PCRs \; \times \; `r ncol(positive.count.rarefied)` \; MOTUs$</center>
## Why rarefying? {.vcenter .columns-2}
@@ -328,8 +312,7 @@ positive.motus.rare is now a $`r nrow(positive.count.rarefied)` \; PCRs \; \time
knitr::include_graphics("figures/subsampling.svg")
```
<br><br><br><br> Increasing the number of reads only improves the description of the fraction of the PCR that you have sequenced.
## Transforming read counts to relative frequencies
@@ -348,36 +331,30 @@ table(colSums(positive.count.relfreq) == 0)
## The different types of diversity {.vcenter}
<div style="float: left; width: 40%;">
::: {style="float: left; width: 40%;"}
```{r}
knitr::include_graphics("figures/diversity.svg")
```
:::
<div style="float: left; width: 60%;">
::: {style="float: left; width: 60%;"}
<br><br> @Whittaker:10:00 <br><br><br><br>
- $\alpha\text{-diversity}$ : Mean diversity per site ($species/site$)
- $\gamma\text{-diversity}$ : Regional biodiversity ($species/region$)
- $\beta\text{-diversity}$ : $\beta = \frac{\gamma}{\alpha}$ ($sites/region$)
:::
# $\alpha$-diversity
## Which is the most diverse environment? {.flexbox .vcenter}
```{r out.width = "400px"}
knitr::include_graphics("figures/alpha_diversity.svg")
```
```{r out.width = "400px"}
E1 = c(A=0.25,B=0.25,C=0.25,D=0.25,E=0,F=0,G=0)
E2 = c(A=0.55,B=0.07,C=0.02,D=0.17,E=0.07,F=0.07,G=0.03)
@@ -388,8 +365,7 @@ kable(environments,
kable_styling(position = "center")
```
## Richness {.flexbox .vcenter}
The actual number of species present in your environment, whatever their abundances
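A minimal sketch on the two example communities `E1` and `E2` defined above:

```{r echo=TRUE, eval=FALSE}
# Sketch: richness ignores abundances and only counts the species present
sum(E1 > 0)   # 4 species in the perfectly even community E1
sum(E2 > 0)   # 7 species in the uneven community E2
```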
@@ -410,21 +386,19 @@ kable(data.frame(S=S),
## Gini-Simpson's index {.smaller}
<div style="float: left; width: 60%;">
The Simpson's index is the probability of having the same species twice when you randomly select two specimens.
<br>
<br>
</div>
<div style="float: right; width: 40%;">
::: {style="float: left; width: 60%;"}
Simpson's index is the probability of getting the same species twice when you randomly select two specimens. <br> <br>
:::
::: {style="float: right; width: 40%;"}
$$
\lambda =\sum _{i=1}^{S}p_{i}^{2}
$$ <br>
:::
<center>
$\lambda$ decreases when the complexity of your ecosystem increases.

Gini-Simpson's index, defined as $1-\lambda$, increases with diversity.
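A minimal sketch on the example communities (vegan's `diversity(x, index = "simpson")` returns the same $1-\lambda$ from a count or frequency matrix):

```{r echo=TRUE, eval=FALSE}
# Sketch: Simpson's concentration (lambda) and the Gini-Simpson index
simpson = function(p) sum(p^2)
1 - simpson(E1)   # 0.75  : even community of four species
1 - simpson(E2)   # ~0.65 : one dominant species lowers the index
```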
@@ -445,24 +419,22 @@ kable(data.frame(`Gini-Simpson`=GS),
kable_styling(position = "center")
```
## Shannon entropy {.smaller}
Shannon entropy is based on information theory:
<center>$H^{\prime }=-\sum _{i=1}^{S}p_{i}\log p_{i}$</center>
if $A$ is a community where every species is equally represented, then $$
H(A) = \log|A|
$$
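A minimal sketch of the computation for the example communities (zero frequencies are dropped, since $0 \log 0$ is taken as $0$):

```{r echo=TRUE, eval=FALSE}
# Sketch: Shannon entropy H' = -sum(p_i * log(p_i))
shannon = function(p) { p = p[p > 0]; -sum(p * log(p)) }
shannon(E1)   # log(4), about 1.39, for the even four-species community
shannon(E2)   # about 1.37, lower than E1 despite having more species
```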
<center>
```{r out.width = "400px"}
knitr::include_graphics("figures/alpha_diversity.svg")
```
</center>
```{r echo=TRUE}
@@ -476,25 +448,26 @@ kable(data.frame(`Shannon index`=H),
kable_styling(position = "center")
```
## Hill's number {.smaller}
<div style="float: left; width: 50%;">
As :
$$
::: {style="float: left; width: 50%;"}
As: $$
H(A) = \log|A| \;\Rightarrow\; ^1D = e^{H(A)}
$$ <br>
:::
::: {style="float: right; width: 50%;"}
where $^1D$ is the theoretical number of species of an evenly distributed community that would have the same Shannon entropy as ours.
:::
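A minimal sketch, reusing the `shannon()` helper sketched above (an assumption, not a function of the package):

```{r echo=TRUE, eval=FALSE}
# Sketch: first-order Hill number = exp(Shannon entropy)
exp(shannon(E1))   # 4 : matches the richness of the perfectly even community
exp(shannon(E2))   # about 3.9 effective species, well below the richness of 7
```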
<center>
<BR> <BR>
```{r out.width = "400px"}
knitr::include_graphics("figures/alpha_diversity.svg")
```
</center>
```{r echo=TRUE}
@@ -513,16 +486,16 @@ kable(data.frame(`Hill Numbers`=D2),
Based on the generalized entropy of @Tsallis:94:00, we can propose a generalized form of the logarithm.
$$
^q\log(x) = \frac{x^{(1-q)}-1}{1-q}
$$
The function is not defined for $q=1$ but when $q \longrightarrow 1\;,\; ^q\log(x) \longrightarrow \log(x)$
$$
^q\log(x) = \left\{
\begin{align}
\log(x),& \text{if } q = 1\\
\frac{x^{(1-q)}-1}{1-q},& \text{otherwise}
\end{align}
\right.
$$
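A minimal sketch of this generalized logarithm as an R function, here named `log_q` by analogy with the `exp_q`, `H_q`, and `D_q` functions used below (the name is an assumption):

```{r echo=TRUE, eval=FALSE}
# Sketch: Tsallis q-logarithm, falling back to the natural log at q = 1
log_q = function(x, q = 1) {
  if (q == 1) log(x)
  else (x^(1 - q) - 1) / (1 - q)
}
log_q(10, q = 1)       # 2.302585, the natural log of 10
log_q(10, q = 0.999)   # ~2.305, converging to log(10) as q approaches 1
```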
@@ -568,6 +541,7 @@ $$
\end{align}
\right.
$$
```{r echo=TRUE, eval=FALSE}
exp_q = function(x,q=1) {
if (q==1)
@@ -589,12 +563,12 @@ H_q = function(x,q=1) {
}
```
and generalize the previously presented Hill's numbers
$$
^qD=^qe^{^qH}
$$
```{r echo=TRUE, eval=FALSE}
D_q = function(x,q=1) {
exp_q(H_q(x,q),q)
@@ -641,27 +615,29 @@ points(qs,environments.dq[,1],type="l",col="blue")
abline(v=c(0,1,2),lty=2,col=4:6)
```
## Generalized entropy $vs$ $\alpha$-diversity indices
- $^0H(X) = S - 1$ : the richness minus one.
- $^1H(X) = H^{\prime}$ : Shannon's entropy.
- $^2H(X) = 1 - \lambda$ : the Gini-Simpson index.

### When computing the exponential of entropy: Hill's numbers {.smaller}

- $^0D(X) = S$ : the richness.
- $^1D(X) = e^{H^{\prime}}$ : the number of species in an even community having the same $H^{\prime}$.
- $^2D(X) = 1 / \lambda$ : the number of species in an even community having the same Gini-Simpson index.
<br>
<center>
$q$ can be considered as a penalty you give to rare species

**when** $q=0$ all the species have the same weight
</center>
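A minimal sketch checking these correspondences on the example community `E2`, reusing the `shannon()` and `simpson()` helpers sketched above (assumptions, not part of the lecture code):

```{r echo=TRUE, eval=FALSE}
# Sketch: the three classical indices expressed as Hill numbers
sum(E2 > 0)        # ^0D : the richness S
exp(shannon(E2))   # ^1D : exponential of Shannon's entropy
1 / simpson(E2)    # ^2D : inverse of Simpson's lambda
```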
@@ -695,6 +671,7 @@ positive.H = apply(positive.count.relfreq,
FUN = H_spectrum,
q=qs)
```
```{r}
par(bg=NA)
boxplot(t(positive.H),
@@ -706,7 +683,6 @@ points(H.mock,col="red",type="l")
## Biodiversity spectrum and metabarcoding (2) {.flexbox .vcenter .smaller}
```{r}
par(bg=NA)
boxplot(t(positive.H)[,11:31],
@@ -743,9 +719,9 @@ positive.D.means = rowMeans(positive.D)
We perform a basic cleaning:
- removing singletons
- removing too short or too long sequences
- clustering data using `obiclean`
```{bash eval=FALSE,echo=TRUE}
obigrep -p 'count > 1' \
@@ -761,7 +737,6 @@ obiclean -s merged_sample -H -C -r 0.1 \
> positifs.uniq.annotated.clean.fasta
```
## Impact of data cleaning on $\alpha$-diversity (2)
```{r echo=TRUE}
@@ -805,16 +780,11 @@ points(D.mock,col="red",type="l")
positive.clean.D.means = rowMeans(positive.D)
```
# $\beta$-diversity
## Dissimilarity indices or non-metric distances {.flexbox .vcenter}
<center>A dissimilarity index $d(A,B)$ is a numerical measurement <br> of how far apart objects $A$ and $B$ are.</center>
### Properties
@@ -844,19 +814,17 @@ $$
J(A,B) = {{|A \cap B|}\over{|A \cup B|}} = {{|A \cap B|}\over{|A| + |B| - |A \cap B|}}.
$$
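A minimal sketch of the Jaccard index on two hypothetical MOTU lists; on a community matrix, vegan's `vegdist(x, method = "jaccard", binary = TRUE)` returns the corresponding dissimilarity $1-J$:

```{r echo=TRUE, eval=FALSE}
# Sketch: Jaccard similarity between two sets of observed MOTUs
jaccard = function(a, b) length(intersect(a, b)) / length(union(a, b))
jaccard(c("motu1", "motu2", "motu3"),
        c("motu2", "motu3", "motu4"))   # 2 shared / 4 observed = 0.5
```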
## Metrics or distances
<div style="float: left; width: 50%;">
::: {style="float: left; width: 50%;"}
```{r out.width = "400px"}
knitr::include_graphics("figures/metric.svg")
```
:::
::: {style="float: right; width: 50%;"}
A metric is a dissimilarity index satisfying *subadditivity*, also known as the *triangle inequality*
$$
\begin{align}
d(A,B) \geqslant& 0 \\
@@ -865,20 +833,18 @@ d(A,B) =& \;0 \iff A = B \\
d(A,B) \leqslant& \;d(A,C) + d(C,B)
\end{align}
$$
:::
## Some metrics
::: columns
::: {.column width="40%"}
```{r out.width = "400px"}
knitr::include_graphics("figures/Distance.svg")
```
:::
::: {.column width="60%"}
### Computing
$$
@@ -888,8 +854,8 @@ d_m =& |x_A - x_B| + |y_A - y_B| \\
d_c =& \max(|x_A - x_B| , |y_A - y_B|) \\
\end{align}
$$
:::
:::
## Generalizable on a n-dimension space {.smaller}
@@ -904,7 +870,6 @@ $$
with $a_i$ and $b_i$ being respectively the value of the $i^{th}$ variable for $A$ and $B$.
$$
\begin{align}
d_e =& \sqrt{\sum_{i=1}^{n}(a_i - b_i)^2 } \\
@@ -921,20 +886,20 @@ $$
d^k = \sqrt[k]{\sum_{i=1}^n|a_i - b_i|^k}
$$
- $k=1 \Rightarrow D_m$ Manhattan distance
- $k=2 \Rightarrow D_e$ Euclidean distance
- $k=\infty \Rightarrow D_c$ Chebyshev distance
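A minimal sketch of the three special cases on two points of the plane (base R's `dist(x, method = "minkowski", p = k)` covers any finite $k$):

```{r echo=TRUE, eval=FALSE}
# Sketch: Minkowski distances between A = (1, 4) and B = (5, 1)
a = c(1, 4); b = c(5, 1)
sum(abs(a - b))        # k = 1   : Manhattan = 7
sqrt(sum((a - b)^2))   # k = 2   : Euclidean = 5
max(abs(a - b))        # k = Inf : Chebyshev = 4
```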
## Metrics and ultrametrics
<div style="float: left; width: 50%;">
::: columns
::: {.column width="40%"}
```{r out.width = "400px"}
knitr::include_graphics("figures/ultrametric.svg")
```
:::
::: {.column width="60%"}
### Metric
$$
@@ -946,31 +911,29 @@ $$
$$
d(x,z)\leq \max(d(x,y),d(y,z))
$$
:::
:::
## Why is it nice to use metrics? {.flexbox .vcenter}
- A metric induces a metric space
- In a metric space, rotations are isometries
- This means that rotations do not change the distances between objects (see the sketch below)
- Multidimensional scaling methods (PCA, PCoA, CoA...) are rotations
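A minimal sketch illustrating the isometry argument on random points (the rotation matrix is built by hand; nothing here comes from the lecture code):

```{r echo=TRUE, eval=FALSE}
# Sketch: a plane rotation leaves all pairwise Euclidean distances unchanged
pts   = matrix(rnorm(10), ncol = 2)                   # 5 random points in 2D
theta = pi / 6
R     = matrix(c(cos(theta), sin(theta),
                 -sin(theta), cos(theta)), ncol = 2)  # rotation by 30 degrees
max(abs(dist(pts) - dist(pts %*% R)))                 # ~0 : distances preserved
```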
## The data set {.flexbox .vcenter}
**We analyzed two forest sites in French Guiana**
- Mana : Soil is composed of white sands.
- Petit Plateau : Terra firme (firm land). In the Amazon, it corresponds to the area of the forest that is not flooded during high water periods. The terra firme is characterized by old and poor soils.
**At each site, two sets of sixteen samples were collected over one hectare**
- Sixteen soil samples. Each is a mix of five 50 g cores taken from the first 10 centimeters of soil, covering half a square meter.
- Sixteen litter samples. Each consists of the total litter collected over the same half square meter where the soil was sampled.
```{r echo=TRUE}
data("guiana.count")
@@ -978,7 +941,6 @@ data("guiana.motus")
data("guiana.samples")
```
## Clean out bad PCR cycle 1 {.flexbox .vcenter .smaller}
```{r echo=TRUE,fig.height=2.5}
@@ -986,6 +948,7 @@ s = tag_bad_pcr(guiana.samples$sample,guiana.count)
guiana.count.clean = guiana.count[s$keep,]
guiana.samples.clean = guiana.samples[s$keep,]
```
```{r echo=TRUE}
table(s$keep)
```
@@ -1017,7 +980,7 @@ table(s$keep)
## Averaging good PCR replicates (1) {.flexbox .vcenter}
```{r echo=TRUE}
guiana.samples.clean = cbind(guiana.samples.clean,s[rownames(guiana.samples.clean),])
guiana.count.mean = aggregate(decostand(guiana.count.clean,method = "total"),
by = list(guiana.samples.clean$sample),
@@ -1075,18 +1038,20 @@ xy = xy[,1:2]
xy.hellinger = decostand(xy,method = "hellinger")
```
<div style="float: left; width: 50%;">
::: columns
::: {.column width="40%"}
```{r, fig.width=4,fig.height=4}
par(bg=NA)
plot(xy.hellinger,asp=1)
```
:::
::: {.column width="60%"}
```{r out.width = "400px"}
knitr::include_graphics("figures/euclidean_hellinger.svg")
```
:::
:::
## Bray-Curtis distance on relative frequencies
@@ -1167,13 +1132,13 @@ plot(guiana.jac.50.pcoa$points[,1:2],
```
## Principal component analysis {.flexbox .vcenter}
```{r echo=TRUE}
guiana.hellinger.pca = prcomp(guiana.hellinger.final,center = TRUE, scale. = FALSE)
```
```{r fig.height=4,fig.width=12}
par(mfrow=c(1,3),bg=NA)
plot(guiana.euc.pcoa$points[,1:2],
col = samples.type,
@@ -1191,6 +1156,7 @@ plot(0,type='n',axes=FALSE,ann=FALSE)
legend("topleft",legend = levels(samples.type),fill = 1:4,cex=1.2)
```
````{=html}
<!---
## Computation of norms
@@ -1242,6 +1208,7 @@ plot(-guiana.n4.pcoa$points[,1],-guiana.n4.pcoa$points[,2],
```
--->
````
## Comparing diversity of the environments
@@ -1278,7 +1245,4 @@ boxplot(t(guiana.relfreq.final[,samples.type=="soil.Petit Plateau"]),log="y",
names=qs,las=2,col=4,add=TRUE)
```
## Bibliography


@@ -0,0 +1,17 @@
knitr
tidyverse
ggplot2
tibble
tidyr
readr
purrr
dplyr
stringr
forcats
lubridate
kableExtra
latex2exp
MetabarSchool
permute
lattice
vegan

Some files were not shown because too many files have changed in this diff.