## Lessons learnt from lecturing

### Featured

Starting the new year hoping that the pandemic would soon be over turned out to be an illusion. In many countries, instead of efficient vaccination campaigns, we had to face a series of new lockdowns and severe restrictions. New virus variants seem to promise even more of this whilst threatening and disrupting lives. Covid-19, of course, also affected teaching, and in February I started recording my annual lectures in Individual-based Forest Ecology and Management as part of our MSc in Forest Ecology and Sustainable Forest Management. Since teaching had to be remote anyway, I thought it would be appropriate to put the flipped-classroom teaching mode, which I have heard so much about in recent years, to the test. This I did, and the students could follow my lectures in their own time. I also provided videos of my R tutorials. At regular intervals I then invited the students to two-hour Zoom meetings, so that they could ask questions and we could have a discussion. This went better than I had anticipated; however, the students were quite shy and it took a while to get them talking.

Since methods of individual-based forest management form an important part of Continuous Cover Forestry (CCF; sometimes also termed Near-Natural Forestry), I also had to say a few words on CCF, and like last year I was overwhelmed by the positive response of the students. Not only the students from outside Sweden were extremely interested in CCF, but also our Swedish MSc students. This is encouraging, since CCF is thought of as an important instrument for mitigating climate change, and the Swedish forest industry has so far hardly subscribed to alternatives to clearfelling.

Two interesting things came up in the quantitative realm of the lectures. When introducing the aggregation index by Clark and Evans (1954) I remembered that the R spatstat package (Baddeley et al., 2016) offers the function clickppp() which allows the user to determine their own point pattern by mouse clicking. Using this function turned out to be quite a bit of fun, as you can carry out your own little experiments. For example, you can ask your friends and colleagues to try aiming at a random pattern. If carried out correctly, the spatstat function relating to the Clark and Evans index should return a value near 1.

```
library(spatstat)

# Go for your own experiment
myDataP <- clickppp(win = owin(c(0, 50), c(0, 50)))
# Using spatstat
clarkevans(myDataP)
```

You can first try this yourself, and you will most likely discover how hard it is to produce a random point pattern. Although we try really hard to place points randomly, we rarely succeed. You can then go on and ask other people of different age, gender, cultural background etc. to see if anyone or any group of people is less biased than you are yourself. This could even become a nice exercise in citizen science, similar to the counting of singing-bird species in people's gardens on particular days of the year. What we currently know about such simple experiments puts a question mark over all technical instructions where we are supposed to pick or place something at random. Obviously we are not able to do that. And yet surveys and other activities often rely on exactly this randomness.
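As a sketch of what to expect from such experiments, the index can also be computed by hand in base R. The function clarkEvansIndex() below is an illustrative, simplified implementation that ignores the edge corrections spatstat applies: it compares the observed mean nearest-neighbour distance with the one expected under complete spatial randomness, 0.5 / sqrt(lambda).

```r
# Minimal base-R sketch of the Clark-Evans index (no edge correction)
clarkEvansIndex <- function(x, y, xmax, ymax) {
  n <- length(x)
  d <- as.matrix(dist(cbind(x, y)))   # pairwise distances between all points
  diag(d) <- Inf                      # exclude the distance of a point to itself
  observed <- mean(apply(d, 1, min))  # observed mean nearest-neighbour distance
  expected <- 0.5 / sqrt(n / (xmax * ymax)) # expectation under randomness
  observed / expected
}

set.seed(42)
# A uniformly random pattern should give an index near 1
clarkEvansIndex(runif(200, 0, 50), runif(200, 0, 50), 50, 50)
```

Values clearly below 1 indicate clustering and values clearly above 1 indicate regularity, which is what hand-placed "random" patterns typically produce.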

The other interesting matter related to a student's idea of modelling silvicultural systems using spatial statistics. The student in question came across the lines in my textbook on page 90 where I discussed the group shelterwood system and mentioned that in uplands elliptical shapes are often preferred for opening the main forest canopy, to better take into account the light regime and the snow conditions in such topography. She developed the idea of simulating the tree locations of the forest stand using a Poisson point process model (to produce random tree locations again). In a second step an ellipse would be placed in the observation window and all tree locations inside the ellipse would be removed. Here is the code:

```
library(spatstat)
library(plotrix)

insideEllipse <- function(ex, ey, rx, ry, xvector, yvector) {
  return((xvector - ex)^2 / rx^2 + (yvector - ey)^2 / ry^2)
}

set.seed(round(runif(1, min = 1, max = 10000))) # Random starting point
xmax <- 100 # Observation window defined by xmax and ymax
ymax <- 100
myLambda <- 0.05 # Point density
pattern <- rpoispp(lambda = myLambda, win = owin(c(0, xmax), c(0, ymax)), nsim = 1) # Poisson process
myX <- pattern$x # Save x and y in separate vectors
myY <- pattern$y
ellipseX <- runif(1, min = 0, max = xmax) # Define random ellipse centre
ellipseY <- runif(1, min = 0, max = ymax)
rx <- 20 # Define ellipse semi-axis in x direction
ry <- 30 # Define ellipse semi-axis in y direction
thin <- insideEllipse(ellipseX, ellipseY, rx, ry, myX, myY) <= 1 # Determine points to delete
myX <- myX[!thin] # Retain points not earmarked for deletion
myY <- myY[!thin]
pattern <- ppp(myX, myY, xrange = c(0, xmax), yrange = c(0, ymax)) # Define new ppp after thinning
plot(pattern) # Plot the point pattern
draw.ellipse(ellipseX, ellipseY, a = rx, b = ry, angle = 0, border = "red") # Overlay the ellipse
```

You can run the code several times and will find that both the surrounding point pattern and the ellipse change from run to run. A possible extension would be to add a random angle so that the semi-major axis of the ellipse no longer runs parallel to one of the axes of the coordinate system. Such code provides a better understanding of what ellipse cuttings as part of shelterwood systems imply and allows their simulation in growth projection models.
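A minimal sketch of such a rotated-ellipse extension, assuming a rotation angle theta and using uniformly random points as a stand-in for the Poisson pattern. The function insideRotatedEllipse() is an illustrative modification of insideEllipse() above: points are translated to the ellipse centre and rotated back by the negative angle before the standard ellipse inequality is applied.

```r
# Illustrative extension: an ellipse rotated by a random angle theta
insideRotatedEllipse <- function(ex, ey, rx, ry, angle, xvector, yvector) {
  xr <- (xvector - ex) * cos(angle) + (yvector - ey) * sin(angle)  # rotate by -angle
  yr <- -(xvector - ex) * sin(angle) + (yvector - ey) * cos(angle)
  xr^2 / rx^2 + yr^2 / ry^2
}

set.seed(1)
myX <- runif(500, 0, 100)                 # stand-in for the Poisson pattern
myY <- runif(500, 0, 100)
theta <- runif(1, 0, pi)                  # random orientation of the ellipse
thin <- insideRotatedEllipse(50, 50, 30, 15, theta, myX, myY) <= 1
myX <- myX[!thin]                         # remove trees inside the rotated ellipse
myY <- myY[!thin]
```

The retained points can then be turned into a new ppp object and plotted as before.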

These were nice examples of students actively engaging in the course. I look forward to what we will discover together next year.

# Quantifying non-spatial species diversity

Species diversity, a combination of species richness and relative abundance (Newton, 2007), is not the only aspect of biodiversity, but a rather important and the most commonly considered one (Kimmins, 2004). In the past, most importance has been assigned to species diversity and there is a wide range of approaches to quantifying this aspect of diversity.

In the light of large-scale forest destruction and associated species loss, which may dramatically increase as a result of climate change and of the growth of the human population, the monitoring of species diversity assumes quite some importance. Forest management and conservation have a particular influence on and responsibility for species diversity. In this context, the dynamics of forest succession and the effect of disturbances are crucial aspects of diversity research.

Species diversity is usually defined in terms of species richness and abundance. Species richness is simply the number of species whilst abundance is a density measure, i.e. the number of organisms of a certain species per space unit. Abundance can be expressed in absolute and in relative terms (Gaston and Spicer, 2004).

Species richness is often interpreted as a surrogate measure for other kinds of biodiversity: More species usually imply greater genetic variation, i.e. there is a greater diversity of genes in the population. This in turn implies greater ecological variation and a better exploitation of niches and habitats (Magurran, 2004; Gaston and Spicer, 2004; Krebs, 1999).

A species diversity index is a mathematical expression of species diversity in a community and provides important information on the occurrence and distribution of species in a community (Krebs, 1999). Popular examples of typical species diversity measures include the Shannon and Simpson indices. Both indices take the relative abundance of different species into account rather than simply expressing species richness. The Shannon index (see Eq. (1); Shannon and Weaver, 1949) is an information-theory index and was originally proposed to quantify entropy, i.e. the uncertainty of information in strings of text (Krebs, 1999). It measures the uncertainty of the next letter in a coded message or of the next species to be found in a community. A monospecific forest would have no uncertainty and H' = 0. The Shannon index is affected by both the number of species and their equitability or evenness. By contrast, the Simpson index (see Eq. (2); Simpson, 1949) is a dominance or concentration measure. The corresponding evenness forms are often used as a standardisation to allow comparisons between different monitoring sites (Pretzsch, 2009) and are constrained between 0 and 1.

(1) $H' = -\sum_{i=1}^{s} p_i \ln p_i$

$p_i$ is the proportion of individuals found in the ith species and s is the number of species. There are different variants of the Simpson index, and the version used here is the one suggested by Magurran (2004, p. 116). It uses the reciprocal as opposed to the complement form for calculating the evenness measure. Whilst the Shannon measure emphasises the species-richness component of diversity, the Simpson index is weighted by the abundances of the commonest species. The Shannon and Simpson measures are among the most meaningful and robust diversity measures available (Krebs, 1999; Magurran, 2004).
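As a small worked example with hypothetical abundances (three species with 50, 30 and 20 trees), the Shannon index and its evenness can be computed in a few lines of base R:

```r
# Hypothetical abundances of three species
abundance <- c(50, 30, 20)
p <- abundance / sum(abundance)           # relative abundances p_i
(H <- -sum(p * log(p)))                   # Shannon index, approx. 1.03
(evenness <- H / log(length(abundance)))  # Shannon evenness, approx. 0.94
```

The evenness close to 1 reflects that the three species are fairly equally represented.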

Both species diversity measures can be calculated based on tree number proportions and on basal area proportions. Basal area, g (measured in m²), is the cross-sectional area of a tree stem at 1.3 m above ground level. In the case of basal area proportions, the sizes of conspecific trees are taken into account and not only their numbers.

The Shannon index is easy to compute in R. In lines 1 and 3 the species-specific total numbers of trees and basal areas are determined. The Shannon index is calculated in lines 4f. and 6f. for tree number and basal area proportions, respectively. The corresponding evenness measures are calculated in lines 8 and 9. We used the natural logarithm here; it is also common to apply the binary logarithm.

```
stems.species <- tapply(myData$treeno, myData$species, length)
myData$ba <- pi * (myData$dbh / 200)^2
basalArea.species <- tapply(myData$ba, myData$species, sum)
(ShannonStems <- -sum(stems.species / sum(stems.species) *
   log(stems.species / sum(stems.species))))
(ShannonBasalArea <- -sum(basalArea.species / sum(basalArea.species) *
   log(basalArea.species / sum(basalArea.species))))
ShannonStems / log(length(stems.species))
ShannonBasalArea / log(length(stems.species))
```

The index by Simpson (1949) gives the probability of any two individuals drawn at random from an infinitely large population belonging to the same species. Simpson suggested that this probability is inversely related to diversity.

(2) $D = 1 \, / \sum_{i=1}^{s} p_i^2$

The R code for the Simpson index largely follows the same structure as the code above for the Shannon index. Here, too, the index is calculated both for tree number and basal area proportions.

```
(SimpsonStems <- 1 / sum((stems.species /
   sum(stems.species))^2))
(SimpsonBasalArea <- 1 / sum((basalArea.species /
   sum(basalArea.species))^2))
SimpsonStems / length(stems.species)
SimpsonBasalArea / length(stems.species)
```
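A corresponding worked example with hypothetical abundances of 50, 30 and 20 trees illustrates the reciprocal Simpson index and its evenness:

```r
# Hypothetical abundances of three species
abundance <- c(50, 30, 20)
p <- abundance / sum(abundance)
(D <- sum(p^2))                         # Simpson's concentration, 0.38
(invD <- 1 / D)                         # reciprocal Simpson index, approx. 2.63
(evenness <- invD / length(abundance))  # Simpson evenness, approx. 0.88
```

With perfectly even abundances the reciprocal index would equal the number of species, so the evenness of 0.88 indicates mild dominance of the commonest species.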

Other species diversity indices are described in detail in Krebs (1999), Magurran (2004) and in Staudhammer and LeMay (2001).

Literature

Gaston, K. J. and Spicer, J. I., 2004. Biodiversity. An introduction. Blackwell Publishing. Oxford, 191p.

Kimmins, J. P., 2004. Forest ecology – a foundation for sustainable management. 3rd edition. Pearson Education Prentice Hall. Upper Saddle River, NJ. 700p.

Krebs, C. J., 1999. Ecological methodology. 2nd edition. Addison Wesley Longman. New York, 620p.

Magurran, A. E., 2004. Measuring biological diversity. Blackwell Publishing. Oxford, 256p.

Newton, A. C., 2007. Forest ecology and conservation. A handbook of techniques. Oxford University Press. Oxford, 454p.

Pretzsch, H., 2009. Forest dynamics, growth and yield. From measurement to model. Springer, Heidelberg, 664p.

Shannon, C. E. and Weaver, W., 1949. The mathematical theory of communication. University of Illinois Press. Urbana, 35p.

Simpson, E. H., 1949. Measurement of diversity. Nature 163: 688.

Staudhammer, C. L. and LeMay, V. M., 2001. Introduction and evaluation of possible indices of stand structural diversity. Canadian Journal of Forest Research 31: 1105-1115.

# Research in the time of corona

What a time! Who would have expected such an unprecedented global disaster? I just hope that as many readers of this blog as possible have remained unaffected by the virus and can continue in good health.

I was caught by the German lockdown in mid-March when reuniting with my family near Göttingen. Luckily I had finished all my teaching just a month earlier. Of course, all my conferences and scientific visits were cancelled, as they were for most of us. On my way between Umeå and Frankfurt I travelled through ghost airports that were only marginally functioning… I am glad that I made it in the end.

What was anticipated to be a month in Germany has now become three months, more or less in total isolation from people other than family. Of course, it has also been a great time for the family to live through this crisis together, and we have taken much comfort from being with each other.

Surely everybody has felt the strangeness of working from home without much external contact and the confinement of limited living and working space. It was nice to see colleagues from work on Zoom from time to time and to catch up, but otherwise I felt a bit like a monk, working quietly in his cell and not venturing much beyond the boundaries of the property. Since things have relaxed a bit, wearing masks in public has become a common habit, and we almost don't notice them any more.

Despite many lives lost everywhere, it has certainly been a convenient time to sit down and think about research and to finish some long-standing projects. Luckily research in forest biometrics and quantitative ecology is quite crisis proof: If you have the data, all you need is one or two computers. So I upgraded my home office a bit and simply cracked on.

I came to Germany with an almost finished research project on the spatial correlation between tree species and tree size diversity in highly diverse Chinese temperate woodlands. This topic has intrigued me for quite a while, as I believe the spatial correlation between tree species and tree size diversity is part of nature’s mechanism to maintain high levels of biodiversity. Understanding more about this relationship will allow us to mimic the natural maintenance of biodiversity and this is crucial to our efforts to stem the tide of worldwide biodiversity loss. I collaborated here with Gongqiao Zhang and Xiaohong Zhang from the Chinese Academy of Forestry.

Next I worked on a new principle of quantifying nearest-neighbour size inequality. The idea for this work spontaneously came to me when reviewing methods of modelling asymmetric growth in individual-based models. One of these methods used trigonometric functions and related to another method that was suggested by Oscar García in 2014. When I remembered this, I reached out to Oscar in Chile and we had a good email discussion on this new index, which I much enjoyed. Together with Janusz Szmyt from Poznan University of Life Sciences and Gongqiao Zhang from the Chinese Academy of Forestry we found that the new nearest-neighbour characteristic is a good indicator of spatial size inequality but is also highly correlated with the growth of the subject tree.

Then I considered the intriguing problem of applying modified approval voting to situations where a number of test persons are asked to mark trees for eventual removal on a sheet of paper or on a tablet computer. This situation is common in marteloscope research, where all trees of a forest stand are numbered. I reached out to colleagues at Technische Universität Berlin, Markus Brill and his team, and we figured that part of their voting research can be used to work out how a representative list of trees can be calculated as a synthesis of the individual marks of a number of test persons. This is a new approach to crowdsourcing, with a view to inviting experts in a certain environmental field and synthesising their expert knowledge in a representative way through modified approval voting. Such quantitative crowdsourcing is very useful whenever new ways in forest management need to be followed and no best-practice guidelines are available. We applied the novel method to 50 marteloscope data sets kindly provided by Jens Haufe from the Technical Development Department of Forest Research at Ae (Scotland, UK).

I much enjoy this interdisciplinary and international work and I am glad that through digitalisation it is still possible to reach out and collaborate despite Corona. There is so much to gain from this kind of cooperation. Somehow research has carried me through this time of change. Let’s just hope that things will have improved over the summer so that we can meet on campus again.

# Weibull distribution for characterising stem-diameter structure

The Weibull density distribution is known as

$f(x) = \frac{\gamma}{\beta} \left( \frac{x - \alpha}{\beta} \right)^{\gamma - 1} e^{-\left( \frac{x - \alpha}{\beta} \right)^{\gamma}}$,

where $\alpha$ is the location, $\beta$ the scale and $\gamma$ the shape parameter, i.e. the parameters of the Weibull distribution are interpretable, which is always a good property of models.
The cumulative distribution function, i.e. the integral of the density function, is much simpler:

$F(x) = 1 - e^{-\left( \frac{x - \alpha}{\beta} \right)^{\gamma}}$

What does it mean?

The Weibull density distribution allows characterising tree stem-diameter distributions by providing trend curves but more importantly by summarising stem diameters by means of three parameters that can be interpreted.

The shape parameter of the Weibull distribution, $\gamma$, is of particular interest in this context and the following interpretation aid can be used (Burkhart and Tomé, 2012, p. 198):

$1 < \gamma < 3.6$: Skewed to the right
$\gamma > 3.6$: Skewed to the left
$\gamma \leq 1$: Negative exponential, reverse J-shaped

When $\gamma$ is less than 1, the distribution is reverse J-shaped, as found in uneven-aged forest stands, and when $\gamma$ equals 1, a negative exponential distribution results. If $\gamma = 3.6$, the Weibull distribution approximates a normal distribution, and this value divides left- and right-skewed curves. In general, $\gamma > 1$ gives bell shapes typical of even-aged forest stands. The location parameter is directly related to the minimum diameter in a stand (Burkhart and Tomé, 2012, p. 265). In the context of diameter distributions, all model parameters must be positive.

Where does it come from?

The Weibull distribution is named after Swedish mathematician Waloddi Weibull, who described it in detail in 1951, although it was first identified by Fréchet (1927) and first applied by Rosin and Rammler (1933) to describe a particle size distribution.

Why is it important?

The Weibull distribution is one of the most flexible and most commonly applied models for tree stem diameters. Its parameters are comparatively easy to estimate and the distribution is easy to apply.

How can it be estimated?

Robinson and Hamann (2010, p. 164ff.) described in detail how the three parameters of the Weibull distribution can be estimated using R and the maximum-likelihood method. The grey box below gives an adaptation of that method:

```
dweibull3 <- function(x, gamma, beta, alpha) {
  (gamma / beta) * ((x - alpha) / beta)^(gamma - 1) *
    exp(-((x - alpha) / beta)^gamma)
}

loss.w3 <- function(p, data)
  sum(log(dweibull3(data, p[1], p[2], p[3])))

mle.w3.nm <- optim(c(gamma = 1, beta = 5, alpha = 10),
   loss.w3, data = myData$dbh, hessian = TRUE,
   control = list(fnscale = -1))

mle.w3.nm$par # Model parameters

# Check whether the curve looks OK
xx <- seq(10, 60, 1)
hist(myData$dbh, freq = FALSE, breaks = 50, xlim = c(10, 60))
lines(xx, dweibull3(xx, mle.w3.nm$par[1], mle.w3.nm$par[2],
   mle.w3.nm$par[3]), lty = 1, col = "red")
```

Since R only provides an implementation of the two-parameter version, the code starts with a new function dweibull3() implementing the three-parameter version. This is followed by a maximum-likelihood loss function using the previously defined function dweibull3(). This loss function in turn forms one of the arguments of the optim() function used for carrying out the regression. myData is a data frame that includes a vector of stem diameters that can be addressed by myData$dbh.
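A quick sanity check of the three-parameter density (with the same parameter names as in the code above): for a location parameter of zero it must coincide with R's built-in two-parameter dweibull().

```r
# Three-parameter Weibull density as defined in the text
dweibull3 <- function(x, gamma, beta, alpha) {
  (gamma / beta) * ((x - alpha) / beta)^(gamma - 1) *
    exp(-((x - alpha) / beta)^gamma)
}

x <- seq(1, 60, 0.5)
# With alpha = 0 the function reduces to the two-parameter form
all.equal(dweibull3(x, gamma = 2, beta = 12, alpha = 0),
          dweibull(x, shape = 2, scale = 12)) # TRUE
```

Such checks are worthwhile whenever a standard density is re-implemented by hand.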

An alternative to nonlinear maximum-likelihood regression is percentile estimation. The idea of this approach is to estimate the three parameters of the Weibull distribution from selected points of the distribution, e.g. the 63rd or 95th percentile. The theory of percentile estimators is well explained in Clutter et al. (1983, p. 127ff.). Percentile estimation can be very valuable where maximum-likelihood regression does not produce any results or produces unsuitable ones. Some methods of percentile estimation have also been linked to straightforward sampling methods, so that the parameters of the Weibull function can almost be sampled directly in the field without much effort. Percentile methods can also be used to identify starting values for nonlinear regression. One example method of percentile estimation is given below.

According to Wenk et al. (1990, p. 198f.) and Burkhart and Tomé (2012, p. 265)

$\beta = d_{63} - \alpha$.

$d_{63}$ is the 63rd percentile of the diameter distribution and can be interpreted as the diameter below which approximately 63% of all trees lie. $\alpha = d_{\min}$, i.e. the minimum diameter in a tree population or forest stand.

Finally, Gerold (1988) suggested that $\gamma$ can be estimated from $d_{63}$ and $d_{95}$, the diameter below which approximately 95% of all trees lie.

$\gamma = \frac{\ln\left(-\ln 0.05\right)}{\ln (d_{95} - \alpha) - \ln (d_{63} - \alpha)} \approx \frac{1.0972}{\ln (d_{95} - \alpha) - \ln (d_{63} - \alpha)}$

$d_{63}$, $d_{95}$ and $d_{\min}$ can be estimated from any empirical diameter distribution but also by employing a simple sampling procedure. Based on a systematic sampling grid, approximately ten sample points need to be identified in every forest stand along with the twelve tree neighbours nearest to each sample point. Römisch (1983) found that $d_{63}$ can be estimated from the diameter of the fifth largest tree out of the twelve sample trees and that $d_{95}$ can be estimated from the largest-diameter tree out of ten sample trees nearest to the sample point. $d_{63}$ and $d_{95}$ are then calculated as the arithmetic means of all ten samples. $d_{\min}$ is the smallest diameter of all sample trees (Gerold, 1988).
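A sketch of the percentile approach applied to a simulated diameter sample with known parameters. The estimators used here are the quantile-based relations discussed above (location from the minimum diameter, scale from the 63rd percentile, shape from the 63rd and 95th percentiles); all numbers are illustrative.

```r
set.seed(123)
# Simulate 10000 diameters from a known three-parameter Weibull:
# location alpha = 5, shape gamma = 2, scale beta = 12
dbh <- 5 + rweibull(10000, shape = 2, scale = 12)

alpha <- min(dbh)               # location: minimum diameter
d63 <- quantile(dbh, 0.63)      # 63rd percentile
d95 <- quantile(dbh, 0.95)      # 95th percentile
beta <- unname(d63 - alpha)     # scale
gamma <- 1.0972 / (log(d95 - alpha) - log(d63 - alpha)) # shape
c(alpha = alpha, beta = beta, gamma = unname(gamma))
```

The estimates should come out close to the true values of 5, 12 and 2, which is a useful way to convince oneself that the percentile relations work before applying them to field data.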

Literature

Burkhart, H. and Tomé, M., 2012. Modeling forest trees and stands. Springer, Dordrecht.

Clutter, J. L, Fortson, J. C., Pienaar, L. V., Brister, G. H. and Bailey, R. L., 1983. Timber management. A quantitative approach. John Wiley & Sons, New York.

Fréchet, M., 1927. Sur la loi de probabilité de l’écart maximum. Annales de la Société Polonaise de Mathematique 6: 93-116.

Gerold, D., 1988. Describing stem diameter structure and its development by using the Weibull distribution. Wissenschaftliche Zeitschrift der Technischen Universität Dresden 37: 221-224.

Robinson, A. P. and Hamann, J. D., 2010. Forest analytics with R. An introduction. Use R! Springer. New York, 339p.

Römisch, K., 1983. A mathematical model for simulating growth and thinnings of even-aged pure stands. PhD thesis, Technical University Dresden. Dresden, 197p.

Rosin, P. and Rammler, E., 1933. The laws governing the fineness of powdered coal. Journal of the Institute of Fuel 7: 29-36.

Weibull, W., 1951. A statistical distribution function of wide application. Journal of Applied Mechanics 18: 293-297.

Wenk, G., Antanaitis, V. and Šmelko, Š., 1990. Forest growth and yield science. Deutscher Landwirtschaftsverlag. Berlin, 448p.

# Visiting BOKU University in Vienna

Long had I intended to pay BOKU University in Vienna another visit, and much prevented me from carrying out this plan: too many excuses that couldn't be ignored, or so at least we often tell ourselves. To me BOKU University Vienna, officially the University of Natural Resources and Life Sciences, Vienna, is a place of many friends: Here I found many people I have collaborated with over the years, mentors and students. Many of them visited me at the various places I have worked at in Europe, and eventually I decided to apply for my habilitation at BOKU, which was awarded to me in 2009. The fact that one of my friends here, Hubert Hasenauer, has recently become the vice-chancellor of BOKU was another incentive to return. At the same time it is nice to contribute to increasing the cooperation between BOKU and SLU.

BOKU is a truly lovely place, not too far from Vienna city centre but situated in a slightly quieter part of the city next to a large and very attractive park which seems like a natural extension of the campus. The representative facade of the Wilhelm-Exner building at 82 Peter-Jordan Street offers a special welcome in style. The onset of lovely autumn colours in adjacent vineyards is a particular treat, and consuming moderate amounts of Heurigen wine clearly fuels scientific inspiration. It was wonderful that my BOKU mentor and friend Hubert Sterba took me to one of Vienna's Heurigen restaurants the other day.

Manfred Lexer, acting head of the Institute of Silviculture, and I submitted a guest-professorship proposal to BOKU's Senate last year and, after its approval in December 2018, planned this visit, which I unfortunately had to postpone by one term, as I needed the first half of the year to finish my textbook. At the same time Hubert Hasenauer kindly supported my visit, as we had agreed to meet up again a long time ago. In the end things worked out very well, since I could prepare my course in "Individual-based forest ecology and management" in good time and the accompanying textbook had just appeared in print.

The Institute of Silviculture has a long tradition in eco-physiological gap modelling (such as the PICUS model) but also with statistical individual-oriented modelling, such as the MOSES model. During my visit I am attempting to add to this rich modelling expertise my experience in point-process inspired individual-based modelling. Currently I am using some of Hubert Hasenauer’s Norway spruce – Scots pine data that he used for his MOSES model to develop a new, advanced interaction-field-based model for mixed-species forests. The model will be based on relative growth rates and attempts to better describe the simultaneous allocation of biomass to height and stem-diameter growth.

At the same time I am cooperating with Xiaohong Zhang from the Chinese Academy of Forestry who currently spends a year at BOKU’s Institute of Silviculture and is attending my course in “Individual-based forest ecology and management”. Together we are investigating the interaction between Quercus mongolica and Pinus koraiensis in semi-natural woodlands in China’s Jilin province.

On Tuesday, 22 October, I will give a scientific talk on “Understanding forest development through interaction: Individual-based models in forestry” as part of the Institute’s Science Afternoon.

It is good that there is funding for such academic visits. They clearly make a big difference to the life of researchers and students. Everybody involved in them gains experience, motivation and inspiration, things that are so essential to our daily work. I wished more SLU students and researchers would use the opportunity to visit BOKU.

# Individual-based methods in forest ecology and management

It seems that many textbook authors did not engage in lengthy analyses or weigh up pros and cons to deliberate whether or not they should write their book. Most authors of the better books simply felt the need, the personal urge, to embark on this adventure. And an adventure it truly is. For a start, you are often discouraged by your own organisation from writing a book, since articles are rated more highly. I have seen this at a number of universities. This is a shame, since textbooks are the only way to show the big, holistic picture, i.e. how all the small peer-reviewed research papers relate to one another and together make a big, intriguing story. Also, of course, textbooks are a crucial contribution to academic education and ensure that our good ideas are passed on to the next generation of researchers and practitioners.

Looking back, the writing process was really fun. It was an eye-opener for us authors, too. Better than ever before we saw and understood how seemingly different concepts and ideas are related and how big and important the field of individual-based forest ecology and management actually is. The writing process was a true academic quest and every hour spent on it was worth it. The most dreary and nerve-racking part was clearly checking the proofs this summer, but that is not our experience alone and is simply part of writing a book.

Well, now here it is. One of these greenish books with a little bishop on the front cover. The text provides essential information on theories and concepts of individual-based forest ecology and management and introduces point process statistics for analysing plant interactions. This is followed by methods of spatial modelling with a focus on individual-based models. The book is complemented by key concepts of modern plant growth science. Finally new methods of measuring, analysing and modelling human interaction with trees in forest ecosystems are introduced and discussed. For better access and understanding, all methods introduced in this book are accompanied by example code ready to use in the statistical software R and by worked examples.

I hope you will enjoy it. Check it out! My first course based on the new book will be given at BOKU University, Vienna in October this year. We thank the team at Springer for the excellent cooperation and for their patience.

# Potential growth and quantile regression

What does it mean?

AGR = potential AGR $\times$ modifier(s)

In many plant modelling applications growth processes are modelled in such a way that maximum growth or the growth of dominant plants is reduced by mitigating agents, such as the interaction with other plants and factors of the physical environment, e.g. light, water, temperature and nutrients. Dominant plants can for example be thought of as open-grown trees growing on their own in the open landscape without interacting with any other trees, so that at least stem-diameter growth can be considered maximum. This strategy is referred to as the potential-modifier approach (AGR is the absolute growth rate; see Pommerening and Muszta, 2016).
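A minimal numerical sketch of the principle, using the first derivative of the Chapman-Richards growth function for the potential (with the parameter values that appear in the regression example later in this post) and a purely hypothetical exponential modifier depending on an arbitrary competition index ci:

```r
# Potential AGR from the first derivative of the Chapman-Richards function
potentialAGR <- function(dbh, A = 54.1, k = 0.01, p = 1.19) {
  A * k * p * exp(-k * dbh) * (1 - exp(-k * dbh))^(p - 1)
}

# Hypothetical modifier in (0, 1]: heavier competition reduces growth more
modifier <- function(ci, c1 = 1.5) exp(-c1 * ci)

agr <- potentialAGR(dbh = 25) * modifier(ci = 0.4)
agr <= potentialAGR(dbh = 25) # TRUE: realised AGR never exceeds the potential
```

Because the modifier is bounded by 1, the realised AGR can never exceed the potential, which is exactly the property that makes the approach robust.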

Where does it come from?

The potential-modifier approach was first published by Newnham (1964) and Botkin et al. (1972) as part of empirical and so-called gap models.

Why is it important?

Potential growth defines an upper limit of growth for a given species on a given site. This avoids unrealistic model estimations, as can be the case when estimating AGR directly from plant size, interaction and factors of the abiotic environment. Usually only the model parameters of the function defining potential growth change when adapting the model to new species and sites whilst the parameters relating to the modifiers stay the same. This increases the robustness of the overall growth model and allows easier adaptations.

How can it be used?

Measurements from the above-mentioned open-grown trees can be used for defining potential AGR. However, since this involves substantial additional sampling effort, another strategy is to apply quantile regression (Koenker and Park, 1994; Cade and Noon, 2003) to the data collected for parametrisation. Quantile regression is not so different from conventional regression. Instead of modelling the conditional mean of the observed data (mean regression) or the 0.5-quantile (median regression), larger or smaller quantiles are selected. For the application to the potential-modifier approach, upper quantiles of 0.95 or 0.975 are usually used.
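The idea can be illustrated in base R without any package: quantile regression minimises an asymmetric ("pinball") loss instead of squared error, and for an intercept-only model the minimiser is simply the sample quantile. The function and data below are illustrative.

```r
# Pinball loss: residuals above c are weighted by tau, those below by (1 - tau)
pinballLoss <- function(c, y, tau) {
  r <- y - c
  sum(ifelse(r >= 0, tau * r, (tau - 1) * r))
}

set.seed(7)
y <- rnorm(5000, mean = 10, sd = 2)  # illustrative observations

# Minimising the pinball loss over a constant recovers the tau-quantile
fit <- optimize(pinballLoss, interval = range(y), y = y, tau = 0.975)
fit$minimum          # close to ...
quantile(y, 0.975)   # ... the empirical 97.5% quantile
```

Replacing the constant by a nonlinear function of size, as nlrq() does, turns this into the quantile regression used for potential growth.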

For example, assuming a dependency on size (tree stem diameter in this case), AGR can be described using the first derivative of the Chapman-Richards growth function (Pienaar and Turnbull, 1973; see also my earlier blog post on this function):

$\mathrm{AGR}(y) = A k p\, e^{-ky} \left(1 - e^{-ky}\right)^{p-1}$,

where y in this case is tree stem diameter but otherwise could be any plant size characteristic. A, k and p are model parameters.

R code

In R, it is quite straightforward to apply quantile regression thanks to the quantreg package. In preparation AGR and size data should be compiled in a common data frame. Then we install and load the quantreg package:

```r
install.packages("quantreg", dep = TRUE)
library(quantreg)
```

Next we define the AGR function to be used for describing potential growth; for this we again use the aforementioned first derivative of the Chapman-Richards growth function, where time is exchanged for size (stem diameter dbh in this case):

```r
dpot <- function(dbh, xA, xk, xp) {
  return(xA * xk * xp * exp(-xk * dbh) * (1 - exp(-xk * dbh))^(xp - 1))
}
```

Finally we can use the quantile regression routine nlrq(), which in turn uses the function dpot():

```r
nlsout <- nlrq(AGR ~ dpot(dbh, A, k, p), data = TreeList,
               start = list(A = 54.1, k = 0.01, p = 1.19),
               tau = 0.975, trace = TRUE)
```

From the syntax we can see that the difference to common regression procedures such as nls() is minimal. The main difference is the parameter $\tau$ (argument tau), which defines the quantile. A model summary can be obtained in the same way as from nls(), which also allows retrieving the model parameters:

```r
summary(nlsout)
A <- summary(nlsout)$coefficients[1]
k <- summary(nlsout)$coefficients[2]
p <- summary(nlsout)$coefficients[3]
```

And here is an example result:

Cade and Noon (2003) noted that quantile regression also has a general place in data analysis, as it provides a more complete view of possible causal relationships between variables in ecological processes, where mean-regression techniques would fail to identify relationships between explanatory and response variables. An additional advantage is that one can directly estimate rate parameters for changes in the quantiles of the response distribution conditional on the predictor variables.

Literature

Botkin, D. B., Janak, J. F., Wallis, J. R., 1972. Some ecological consequences of a computer model of forest growth. The Journal of Ecology 60: 849.

Cade, B. S., Noon, B. R., 2003. A gentle introduction to quantile regression for ecologists. Frontiers in Ecology and the Environment 1: 412-420.

Koenker, R., Park, B. J., 1994. An interior point algorithm for nonlinear quantile regression. Journal of Econometrics 71: 265-283.

Newnham, R. M., 1964. The development of a stand model for Douglas fir. Ph.D. thesis, University of British Columbia, Vancouver, 201 p.

Pienaar, L. V. and Turnbull, K. J., 1973. The Chapman-Richards generalization of von Bertalanffy’s growth model for basal area growth and yield in even-aged stands. Forest Science 19: 2-22.

Pommerening, A. and Muszta, A., 2016. Relative plant growth revisited: Towards a mathematical standardisation of separate approaches. Ecological Modelling 320: 383-392.

# Forest biometrics – what lies ahead?

The emancipation of forest science indeed took a long time. Forest academies, which started off as teaching institutions for forestry staff more than 200 years ago, carried out limited research to support state and private forest management. The spirit of this setup continued until recently, with teaching that was more practical than in other fields of natural sciences and research that was predominantly industry-oriented and dominated by forest management and planning. This fundamentally changed in most western countries towards the 1990s, when forest science started to be considered and reviewed like any other subject area in natural and social sciences. This development is still ongoing in Europe and throughout the world. It has led to a fundamental change in subject areas within forest science: Academic fields such as soil science, genetics and plant physiology that until then had played a modest role went right to the top, whilst formerly dominating fields such as forest management lost much of their importance. Some academic fields were abandoned. Elsewhere, forest science was shut down at the respective universities or merged with other fields beyond recognition. Whether all of these changes were for the better is another matter …

Forest biometrics is a newcomer in forest science. It modestly started off as forest mathematics at different places in Europe and North America some 100-150 years ago for teaching a minimum of essential quantitative skills. Like many academic fields, it was initially supposed to fill a support role for the engineering parts of forest science teaching and research. However, since the 1960s/70s this view has become increasingly outdated and the subject area has successfully established itself at most universities as a research field in its own right, in the same way as other disciplines.

Some chairs in forest biometrics have since specialised in forest growth and yield modelling or in general statistics. With some notable exceptions, this is typically the case in North America. In Europe, however, there is an increasing trend for forest biometrics to play an important role in quantitative ecology and ecosystem modelling. This niche is also partly occupied by other research institutions outside forest science, and the collaboration with these has been very fruitful and inspiring.

The Forest Faculty at SLU was recently ranked as a leading international forest science institution. As a professor representing SLU, I was really pleased to read this. The Faculty has surely deserved this for all their hard work, and at the same time ours is one of the last few forest-science-only faculties in Europe. In 2024 SLU will host the IUFRO World Congress in Stockholm, the birthplace of the SLU Faculty of Forest Sciences, and next year we will launch a new MSc degree in Forest Ecology and Sustainable Forest Management.

Along with other academic fields, forest biometrics has an important role to play in helping the Faculty maintain this position in the world. To strengthen forest biometrics and in recognition of the achievements of the current chair, the professorship in forest biometrics at SLU recently moved from the Department of Forest Resource Management to the Department of Forest Ecology and Management. The chair has received a warm welcome at the new department and I am most grateful to my new colleagues. I will continue my research in individual-based methods of forest ecology involving point process statistics, individual-based modelling, tree growth analysis and the analysis of interaction between humans and trees. The organisational change reflects current trends elsewhere in Europe and allows a better integration of quantitative ecology in the wider academic field of forest ecology. SLU has therefore made an important decision towards shaping the future of forest science and maintaining its mission in Sweden, Europe and the world.

# Modelling size distributions

Analysing and studying size distributions has long been important to population ecology and beyond. Empirical size distributions give important clues about the current size structure of a given population and often even allow conclusions about the prevailing ecological processes. Negative exponential stem-diameter distributions are, for example, often associated with forests that are exposed to some level of disturbance, and bell-shaped stem-diameter distributions can often be found in forest plantations. Therefore computing size distributions is often one of the first data-exploration tasks in ecological studies as well as in studies related to forest management. Other important contexts include:

• Sustainability of timber resources in small-scaled forest ownership,
• Silvicultural controlling,
• Decision support for forest management,
• Training and education,
• Maintaining a certain forest structure for recreational forests,
• Important starting point for analyses in point process statistics.

Empirical size distributions can be produced using histograms and bar plots. These approximate the probability density function, which can be modelled using, for example, the Weibull distribution, but also the beta, gamma or normal distributions. These are parametric models, and it is also possible to produce non-parametric trend curves based on kernels. In that case no particular model assumptions are made, and applying non-parametric trend curves is a good strategy for initial data exploration to prepare the selection of suitable parametric models.
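As a minimal sketch of these two routes (using simulated stem diameters rather than real data; fitdistr() from the MASS package is just one of several possible fitting tools), a parametric Weibull fit and a non-parametric kernel trend curve can be compared in R:

```r
library(MASS)  # provides fitdistr() for maximum-likelihood fitting

set.seed(42)
dbh <- rweibull(500, shape = 2.3, scale = 25)  # simulated stem diameters [cm]

# Parametric route: fit a two-parameter Weibull distribution
wfit <- fitdistr(dbh, densfun = "weibull")
wfit$estimate  # estimated shape and scale parameters

# Non-parametric route: kernel density estimate for initial data exploration
kde <- density(dbh)

# Visual comparison of histogram, kernel trend curve and fitted Weibull curve
hist(dbh, freq = FALSE, main = "", xlab = "dbh [cm]")
lines(kde, lty = 2)
curve(dweibull(x, wfit$estimate["shape"], wfit$estimate["scale"]), add = TRUE)
```

Comparing the kernel trend curve with the fitted parametric curve in the same plot is a quick check of whether the chosen parametric model is appropriate.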

All of these methods and models have in common that they provide static information about size distributions. However, size distributions are dynamic, and we are often interested in how they change with time. Of course, it is possible to model populations bottom-up at the level of individuals and then to summarise each simulation step by size distributions. Often, however, the dynamics of the size distribution itself are of interest; researchers and stakeholders are, for example, interested in the demographic processes of certain size variables. This research approach is then closely related to other demographic studies, e.g. in animal or human populations. Typical processes involved in a dynamic size distribution are shown in the schematic graph below.

Back in the early 2000s silviculturists in the UK were very interested in these dynamic size distributions for deriving thinning guides for Continuous Cover woodlands. I remember well that I first started to discuss them in greater detail with Hubert Sterba (BOKU University, Vienna), when he first visited me at Bangor University in 2002. In 2012 I had the opportunity to exchange views with Jean-Philippe Schütz (ETH, Zürich) whilst working in Switzerland.

In Schütz and Pommerening (2013) we used such a demographic approach to model the equilibrium conditions for a Douglas-fir single-tree selection system in North Wales. The equilibrium conditions were identified in this work to inform and guide forest management. The Douglas-fir plots in Artist’s Wood (Gwydyr Forest) in North Wales have always been my favourites during my time at Bangor University, and Jean-Philippe and I intended to provide evidence that single-tree selection systems are possible with Douglas fir. The modelling approach was based on Schütz (2006). Brzeziecki et al. (2016) refined this approach and applied it to natural stands in Białowieża Forest, where no forest management takes place. Here our research objective was to model equilibrium size distributions for the main tree species, in order to find out about future species-composition trends as a contribution to current discussions about this important virgin forest.

The modelling approach is simple and effective. As an important component it is (1) necessary to model the outgrowth rate $o_i$ for each diameter class $i$. This is a function of (2) the size-dependent absolute growth rate $\Delta d_i$, which needs to be modelled in the next step. Finally (3) the size-dependent mortality rate $m_i$ has to be determined.

Using these “ingredients” the number of trees $N_i$ expected at steady state can be calculated for every size class starting from a known value of $N_1$:

$N_{i+1} = N_i \frac{o_i}{o_{i+1} + m_{i+1}}$

Here $o_i = \Delta d_i / w$, where $w$ is the width of the diameter classes, and $i = 1, \ldots, k$ for $k$ size classes. Using such a dynamic size-class model many questions in population ecology and forest management can be answered.

There have been many alternative models such as the $q$-factor model and its derivatives (see for example Cancino and Gadow, 2002), but they have been found to be too inflexible and not based on real, observed growth and mortality processes.

Literature

Brzeziecki, B., Pommerening, A., Miścicki, S., Drozdowski, S. and Żybura, H., 2016. A common lack of demographic equilibrium among tree species in Białowieża National Park (NE Poland): evidence from long-term plots. Journal of Vegetation Science 27, 460-469.

Cancino, J. and Gadow, K. v., 2002. Stem number guide curves for uneven-aged forests development and limitations. In: Gadow, K. v., Nagel, J. and Saborowski, J. (eds.), 2002. Continuous cover forestry. Assessment, analysis scenario. Kluwer Academic Publishers. Dordrecht, pp. 163-174.

Schütz, J.P., 2006. Modelling the demographic sustainability of pure beech plenter forests in eastern Germany. Annals of Forest Science 63, 93-100.

Schütz, J.P. and Pommerening, A., 2013. Can Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) sustainably grow in complex forest structures? Forest Ecology and Management 303, 175-183.

# Reconstruction of spatial woodland structure

Reconstruction is commonly understood as the process and the result of re-establishing something that (at least partially) no longer exists or of re-establishing the unknown. There is a range of established reconstruction techniques, for example in archaeology, in forensics (e.g. facial reconstruction), in medicine (e.g. implants) and in computing (data reconstruction). Statistical imputation, i.e. the process of estimating missing observations, and spatial interpolation in geostatistics are also related to reconstruction. Without doubt reconstruction is a fascinating research field with many different applications.

Another important, more ecological application is the reconstruction of spatial forest structure. All data collected in forest ecosystems have a temporal as well as a spatial dimension. The properties of the whole forest ecosystem, e.g. wood production, habitat and recreational values, to a large degree depend on the underlying ecosystem structure, particularly on its microstructure. This microstructure is typically shaped by physiological and ecological interactions, but the microstructure also influences these interactions. Such structure-property relationships play a crucial role in providing ecosystem goods and services and in maintaining biodiversity. Usually data related to spatial forest structure are available only on a sample basis, but subsequent research requires full information, which then needs to be reconstructed.

Reconstruction can even be employed for habitat modelling: imagine a set of summary statistics that describe well the requirements of a certain endangered animal species. These can then be used to modify an existing landscape on the computer to meet these requirements. In considering this option, we have actually moved on from reconstruction to construction, because an active change of landscape structure is modelled. Bäuerle and Nothdurft (2011) used spatial reconstruction to model habitat trees.

Another important purpose of reconstruction is the testing of competing summary characteristics. There are usually several statistics describing the same aspect of spatial forest structure or of a certain material. Which one is better, which one should be used in a given analysis? One way of shedding light on this is by reversing the analysis: the competing summary characteristics are used in separate simulations for reconstructing the original data from the results provided by these summary characteristics. This synthesis of a given analysis is nothing other than reconstruction and is applied a lot in materials science.

So how does it work?

Well, it turns out that (re)construction is not difficult at all. First you need one or more summary characteristics describing the structure you intend to reconstruct. This can be a simple index based on a mean, a histogram or a function. You can also choose a number of different summary characteristics.

Then you select a stochastic optimisation method. For this, other researchers and I have successfully used simulated annealing. This method was first developed in physics; it relates to thermodynamics, particularly to how the energy state of, for example, metals changes when they anneal. The reconstruction algorithm is based on simulated annealing. One way to start is by randomly dispersing tree locations in a given observation window. Then, iteratively, one of these points is randomly selected and shifted to a new random location within the observation window. After this change the summary characteristic(s) are re-calculated and compared with a target, e.g. the same summary characteristic computed for some ideal or reference pattern. If the change leads to a better approximation of the target, it is made permanent; otherwise it is rejected and the old state is restored (improvements-only algorithm). In either case, this is followed by the random selection and shift of another point. This process stops when the difference between target and observed characteristic(s) becomes very small or after a certain number of iterations (Torquato, 2002).
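A minimal sketch of this improvements-only algorithm in R, here steering a point pattern towards a pre-set value of the Clark and Evans aggregation index (computed naively without edge correction, so this is an illustration under simplifying assumptions, not a full implementation):

```r
# Clark-Evans index: observed mean nearest-neighbour distance divided by
# the value expected under complete spatial randomness (no edge correction)
clarkEvans <- function(x, y, xmax, ymax) {
  d <- as.matrix(dist(cbind(x, y)))
  diag(d) <- Inf
  meanNN <- mean(apply(d, 1, min))
  meanNN / (0.5 / sqrt(length(x) / (xmax * ymax)))
}

set.seed(1)
n <- 50; xmax <- ymax <- 100
target <- 1.3                                  # slightly regular pattern
x <- runif(n, 0, xmax); y <- runif(n, 0, ymax) # random start configuration
energy <- abs(clarkEvans(x, y, xmax, ymax) - target)

for (iter in 1:5000) {
  if (energy < 0.001) break
  i <- sample(n, 1)                            # pick a random point ...
  xOld <- x[i]; yOld <- y[i]
  x[i] <- runif(1, 0, xmax)                    # ... and shift it to a
  y[i] <- runif(1, 0, ymax)                    # new random location
  energyNew <- abs(clarkEvans(x, y, xmax, ymax) - target)
  if (energyNew < energy) {
    energy <- energyNew                        # keep the improvement
  } else {
    x[i] <- xOld; y[i] <- yOld                 # otherwise restore old state
  }
}
```

The "energy" here is simply the absolute difference between target and observed index; with several summary characteristics it would become a weighted sum of such differences.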

In previous publications I employed this method to reconstruct the structure of a whole forest stand from sample data by interpolating the unmeasured structure between the sample plots (Pommerening and Stoyan, 2008). This even led to the astonishing result that in some cases certain summary characteristics were better estimated from the reconstruction than from the sample, although no additional information was added. Nothdurft et al. (2010) in fact corrected a biased sampling design through spatial reconstruction. Together with Estonian colleagues we also used reconstruction for simulating off-plot edge-correction buffers (Lilleleht et al., 2014). It is also possible to include existing, measured objects in the reconstruction through conditional simulation (Pommerening and Stoyan, 2008).

On my website pommerening.org you can find example code in R and C++ for construction, where you can essentially model a spatial point pattern that leads to a certain pre-set value of the aggregation index by Clark and Evans. Along similar lines it is also possible to set a certain species mingling index (see one of my previous blogs) for an existing point pattern and then to swap pairs of trees of different species. Here the point locations remain the same; only the species marks are re-allocated (see below for some results). Applications to remote-sensing and other image data are also known. There is really no limit to anybody’s resourcefulness in inventing new ways of spatial reconstruction.

Example of species mingling construction: A random point pattern was simulated using a Poisson process in a window of 100 x 100 m. The two species (red and yellow) were randomly assigned to the points with a probability of 0.5 (left). Through construction the species dispersal was optimised to achieve a mingling index J of -0.20, where among the four nearest neighbours heterospecific points attract each other (right). Note that the point locations have remained the same.
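The species-mark swapping can be sketched in R along the following lines; note that, as a simplifying assumption, a plain mean mingling index (the fraction of heterospecific trees among the four nearest neighbours) is used as target here rather than the normalised index J:

```r
set.seed(2)
n <- 100
x <- runif(n, 0, 100); y <- runif(n, 0, 100)   # Poisson-like random pattern

# Point locations never change, so the neighbour lists are computed once
d <- as.matrix(dist(cbind(x, y)))
diag(d) <- Inf
nn <- t(apply(d, 1, function(row) order(row)[1:4]))  # 4 nearest neighbours

# Mean mingling: average fraction of heterospecific nearest neighbours
mingling <- function(species) {
  mean(sapply(1:n, function(i) mean(species[nn[i, ]] != species[i])))
}

species <- sample(c("red", "yellow"), n, replace = TRUE)
target <- 0.7                     # high mingling: heterospecific attraction
energy <- abs(mingling(species) - target)

for (iter in 1:3000) {
  if (energy < 0.005) break
  ij <- sample(n, 2)                          # pick two random trees ...
  species[ij] <- species[rev(ij)]             # ... and swap their marks
  energyNew <- abs(mingling(species) - target)
  if (energyNew < energy) {
    energy <- energyNew                       # keep improvements only
  } else {
    species[ij] <- species[rev(ij)]           # otherwise undo the swap
  }
}
```

Because only marks are swapped, the species proportions and all location-based characteristics of the pattern are preserved exactly, which is the appeal of this construction variant.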

Interested? Any questions? Don’t hesitate to get in touch.

Literature

Bäuerle, H. and Nothdurft, A., 2011. Spatial modeling of habitat trees based on line transect sampling and point pattern reconstruction. Canadian Journal of Forest Research 41, 715-727.

Lilleleht, A., Sims, A. and Pommerening, A., 2014. Spatial forest structure reconstruction as a strategy for mitigating edge-bias in circular monitoring plots. Forest Ecology and Management 316, 47-53.

Nothdurft, A., Saborowski, J, Nuske R. S. and Stoyan, D., 2010. Density estimation on k-tree sampling and point pattern reconstruction. Canadian Journal of Forest Research 40, 953-967.

Pommerening, A. and Stoyan, D., 2008. Reconstructing spatial tree point patterns from nearest neighbour summary statistics measured in small subwindows. Canadian Journal of Forest Research 38, 1110–1122.

Torquato, S., 2002. Random heterogeneous materials. Interdisciplinary applied mathematics 16, Springer, New York.

Tscheschel, A. and Stoyan, D., 2006. Statistical reconstruction of random point patterns. Computational Statistics & Data Analysis 51, 859-871.