WEBVTT

00:00:00.000 --> 00:00:14.800
So if you are wondering about the acronym: this is Reading Concordances in the 21st Century,

00:00:14.800 --> 00:00:19.960
which of course is quite a mouthful to spell out every single time, that's why we have

00:00:19.960 --> 00:00:25.400
settled on RC21. And what we want to do in this talk is to introduce you to some so-called

00:00:25.400 --> 00:00:32.080
strategies that can be used to analyze concordances, which you can think of as basically a corpus query

00:00:32.080 --> 00:00:38.480
result presented in a particular manner. We'll be showing you and introducing you to some

00:00:38.480 --> 00:00:46.640
concordances to read throughout this talk. And as Marianne has already introduced, this will be

00:00:46.640 --> 00:00:52.440
supported with the tool that is being developed in the course of this project. Here we'll be

00:00:52.440 --> 00:00:57.840
focusing on a running example, which comes from a case study of body part nouns in

00:00:57.840 --> 00:01:04.280
19th-century novels. And in this particular talk, we'll be focusing on the word form hands

00:01:04.280 --> 00:01:14.320
in novels by Charles Dickens. Now, before we tell you more about this, we would like you to read

00:01:14.320 --> 00:01:22.120
concordances. So if you haven't seen a concordance before, this is basically what it is. And I will ask you

00:01:22.120 --> 00:01:29.520
just to take a look at this and to see whether you can see anything about how Dickens is using the word

00:01:29.520 --> 00:01:34.280
hands in this context, if there's anything you pick up on.

00:01:44.320 --> 00:01:47.320
Would you like to connect with another person?

00:01:55.320 --> 00:02:00.320
There's really no right or wrong answers, I just want to collect a few impressions.

00:02:00.320 --> 00:02:06.320
Where the hands are: in his pockets, behind him.

00:02:23.320 --> 00:02:28.320
I don't know if that's what you were implying, but his and her, like the front?

00:02:30.320 --> 00:02:34.320
It's always, when it's her hands, it's always relating to him.

00:02:39.320 --> 00:02:46.320
Yes, very good. Thank you very much. We've already collected a number of things that we will come back to throughout the talk.

00:02:46.320 --> 00:02:54.320
And as I say, there's really no correct or incorrect answers here. But I hope that you get an impression.

00:02:54.320 --> 00:02:59.320
Okay, there are some things we can pick up on quite intuitively here without really having any particular

00:02:59.320 --> 00:03:05.320
background, without knowing the novels necessarily. Of course, we could just go ahead and read the entire concordance.

00:03:05.320 --> 00:03:10.320
You might be wondering, what's the point? Maybe just read through the novels at that point,

00:03:10.320 --> 00:03:16.320
which of course, in the case of Dickens, we could actually do. However, there are some issues with this.

00:03:16.320 --> 00:03:25.320
So it doesn't scale well, obviously; as soon as we move to larger corpora, we actually need some other strategy to identify patterns.

00:03:25.320 --> 00:03:36.320
And also, not all patterns are obvious. And what I showed you right now is also not just a random set of lines that you happen to stumble across when opening your corpus.

00:03:36.320 --> 00:03:42.320
But it was only after applying a very dedicated algorithm that we actually identified these.

00:03:42.320 --> 00:03:48.320
And even then, you can see, it's not immediately obvious what we're looking for.

00:03:49.320 --> 00:03:57.320
So we actually need some strategies to go about this more systematically, which basically boil down to rearranging the concordance lines in different ways.

00:03:57.320 --> 00:04:06.320
We have an inventory of strategies shown here and we'll show you a few of them in applied examples later on.

00:04:06.320 --> 00:04:09.320
But before that, we should go into the tool itself.

00:04:09.320 --> 00:04:19.320
And in order to work with concordances, we are developing a Python library called FlexiConc, which obviously comes from "flexible concordancing".

00:04:19.320 --> 00:04:30.320
And we want to create a Python library that would support concordance reading and make it reproducible, accountable, transparent and flexible.

00:04:30.320 --> 00:04:43.320
You see, these epithets are actually very, very important for our purposes, because we want to make concordance reading a science.

00:04:43.320 --> 00:04:48.320
We want to supply researchers with proper tools.

00:04:48.320 --> 00:05:09.320
And we know that we are actually going a bit against the trend here, because people nowadays are more fascinated by large language models and ChatGPT, where you just input your concordance and ask it to analyze it and spit out the result.

00:05:09.320 --> 00:05:12.320
No, that's not what we actually want to achieve.

00:05:12.320 --> 00:05:38.320
We want concordance reading to be a scientific procedure where every single individual step is clearly documented, where we can trace what the researchers have done with their concordance and how they have arrived at the results using understandable and clearly documented algorithms.

00:05:38.320 --> 00:05:50.320
And this is the aim of our FlexiConc Python library that will be used for reading concordances.

00:05:50.320 --> 00:06:02.320
I must stop here and warn you a little bit about what FlexiConc actually is not.

00:06:02.320 --> 00:06:05.320
So it is not a corpus management tool as a whole.

00:06:05.320 --> 00:06:19.320
As our project is focused on reading concordances, we do not do many things that belong to corpus linguistics like collocation analysis or keyword analysis.

00:06:19.320 --> 00:06:22.320
This is outside the scope of our project.

00:06:22.320 --> 00:06:30.320
We are really focused on providing researchers with readable and interpretable concordances.

00:06:30.320 --> 00:06:38.320
So do not expect collocations to come out of our tool, our library.

00:06:38.320 --> 00:06:56.320
So all this should be done outside; visualization and quantitative analysis are likewise supposed to be handled outside of FlexiConc, by a host app that uses it as a module.

00:06:56.320 --> 00:07:01.320
And another thing I want to warn you about is that it is work in progress.

00:07:01.320 --> 00:07:10.320
So for those of you who use Python: you can see there is no link here on the screen.

00:07:10.320 --> 00:07:20.320
And if you try typing something like pip install flexiconc, you will get nothing for the time being.

00:07:20.320 --> 00:07:36.320
Though if you are technically advanced, you will be able to get a link to the current development version from the Jupyter notebook I'm going to show a little bit later.

00:07:36.320 --> 00:07:49.320
But still, I think that the time has not yet come for this library to be distributed to everyone.

00:07:49.320 --> 00:07:57.320
So we hope we will be able to present it properly in March, at our final, end-of-project symposium.

00:07:57.320 --> 00:07:59.320
Please stay tuned.

00:08:00.320 --> 00:08:20.320
However, today we are going to show how one can analyze concordances using FlexiConc, how this actually is implemented, and how this is different from reading concordances in other tools.

00:08:21.320 --> 00:08:29.320
So what happens when we read concordances in FlexiConc?

00:08:29.320 --> 00:08:45.320
We start with a host app, which is a corpus management tool, some tool like Corpus Workbench or CLiC, which you're probably acquainted with, or Sketch Engine and so forth.

00:08:45.320 --> 00:08:53.320
There is some tool, some corpus management tool, which provides a corpus and where we get a concordance from.

00:08:53.320 --> 00:08:56.320
And then we start working with it.

00:08:56.320 --> 00:09:01.320
Then we apply algorithms or algorithm combinations to this concordance.

00:09:01.320 --> 00:09:14.320
And the important thing is that when researchers read concordances, when researchers work with concordances, they try out different things.

00:09:14.320 --> 00:09:22.320
And normally, when you read a research paper, you only get the fanciest ones documented.

00:09:22.320 --> 00:09:36.320
But what we want to show is that actually all steps in reading concordances are important, both intermediate steps and failures.

00:09:36.320 --> 00:09:45.320
They're just as important as the resulting fancy steps where we get the final results.

00:09:45.320 --> 00:09:51.320
For this reason, we store the history of algorithm application as an analysis tree.

00:09:51.320 --> 00:09:54.320
So we introduce a tree-like structure.

00:09:54.320 --> 00:09:56.320
You will see it very soon.

00:09:56.320 --> 00:10:07.320
We store every single algorithm or algorithm combination that has been applied and we store their parameters also.
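
For illustration, here is a minimal Python sketch of such an analysis tree. This is a sketch of the concept only, not FlexiConc's actual API; all names are invented for the example.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Node:
    label: str                                  # e.g. "query: hands" or "select: random sample"
    params: dict = field(default_factory=dict)  # every parameter is stored with the step
    lines: list = field(default_factory=list)   # concordance lines at this node
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

    def apply(self, label: str, algorithm: Callable, **params) -> "Node":
        """Apply an algorithm to this node's lines and record it as a child node."""
        child = Node(label=label, params=params,
                     lines=algorithm(self.lines, **params), parent=self)
        self.children.append(child)
        return child

    def history(self) -> List[Tuple[str, dict]]:
        """Trace the documented path from the root query down to this node."""
        node, path = self, []
        while node is not None:
            path.append((node.label, node.params))
            node = node.parent
        return list(reversed(path))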

00:10:07.320 --> 00:10:16.320
And the results of algorithm application, so-called concordance views, can be sent to the host app to be visualized.

00:10:16.320 --> 00:10:23.320
We will develop a small demonstrator app. For the time being,

00:10:23.320 --> 00:10:27.320
we are using Jupyter notebooks to visualize things.

00:10:27.320 --> 00:10:34.320
And this is something I'm going to show you now with the hands example.

00:10:34.320 --> 00:10:40.320
And if you're interested, you might want to scan this QR code.

00:10:40.320 --> 00:10:50.320
And there you will see the notebook as a whole with everything that we have done with the hands example.

00:10:50.320 --> 00:10:58.320
But please don't read it through right away because it contains spoilers.

00:10:58.320 --> 00:11:01.320
You'd know what will come out at the end.

00:11:01.320 --> 00:11:07.320
But here is actually the beginning.

00:11:07.320 --> 00:11:13.320
So we import the FlexiConc library and then we create a concordance object.

00:11:13.320 --> 00:11:19.320
Here you see this concordance object, which will be called hands_nonquotes.

00:11:19.320 --> 00:11:35.320
And then we apply a function to retrieve all examples from the narrative sections of Dickens' novels from the CLiC corpus.
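
In the notebook, this first step looks roughly like the sketch below. The import path, function, and argument names are assumptions for illustration, not FlexiConc's documented interface:

from flexiconc import Concordance          # assumed import path

hands_nonquotes = Concordance()            # the concordance object from the slide
hands_nonquotes.retrieve_from_clic(        # hypothetical retrieval function
    query="hands",
    corpus="dickens",
    subset="nonquote",                     # narrative sections only, no direct speech
)
print(len(hands_nonquotes.lines))          # 2251 lines in the talk's example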

00:11:35.320 --> 00:11:45.320
The narrative sections are what we call nonquotes because we assume that the language in quotation marks,

00:11:45.320 --> 00:11:51.320
so direct speech, is different as previous research has shown.

00:11:51.320 --> 00:11:58.320
And here is this concordance, which contains 2,251 lines.

00:11:58.320 --> 00:12:01.320
And then we want to have a look at it.

00:12:01.320 --> 00:12:06.320
And here is the tree.

00:12:06.320 --> 00:12:09.320
You probably can't see anything.

00:12:09.320 --> 00:12:12.320
Very small. Sorry about that.

00:12:12.320 --> 00:12:20.320
But you can see this query, which is the root of the tree.

00:12:20.320 --> 00:12:31.320
And you can see this node, which is a leaf of the tree, which has something like order, sort by corpus position.

00:12:31.320 --> 00:12:36.320
So the root node of the tree is the query hands.

00:12:36.320 --> 00:12:45.320
And then we visualize it as a concordance view where all concordance lines are ordered by their corpus position,

00:12:45.320 --> 00:12:48.320
by the position where they occur in their original corpus.

00:12:48.320 --> 00:12:56.320
This is absolutely trivial, I know, but it's something we always start with.
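
As a standalone illustration (the field names are invented for the sketch), this trivial first view is nothing more than:

# Each line carries the position of its match in the corpus; the first,
# trivial concordance view simply orders the lines by that position.
lines = [
    {"text": "... her hands upon his arm ...",        "corpus_position": 1873},
    {"text": "... with his hands in his pockets ...", "corpus_position":  412},
]
view = sorted(lines, key=lambda line: line["corpus_position"])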

00:12:56.320 --> 00:12:58.320
And this is what it looks like.

00:12:58.320 --> 00:13:02.320
So you can see 12 concordance lines here.

00:13:02.320 --> 00:13:11.320
And you can see that they all come from the same novel, Bleak House, because it is the first novel alphabetically.

00:13:11.320 --> 00:13:14.320
The name starts with B.

00:13:14.320 --> 00:13:20.320
So we do not want to concentrate on these examples.

00:13:20.320 --> 00:13:26.320
And the next thing we would like to do is to look at a random sample of lines.

00:13:26.320 --> 00:13:31.320
And here our tree continues growing.

00:13:31.320 --> 00:13:36.320
So we perform a selection operation.

00:13:36.320 --> 00:13:43.320
Namely, we select 30 random concordance lines to look at and to analyze them in more detail.
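
A minimal sketch of this selection step; fixing the random seed and recording it as a node parameter is what keeps the step reproducible:

import random

def select_random(lines, n=30, seed=42):
    """Select n random concordance lines; n and seed are stored in the tree node."""
    rng = random.Random(seed)
    return rng.sample(lines, min(n, len(lines)))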

00:13:43.320 --> 00:13:48.320
Well, here I only have 12 of them to be shown on screen.

00:13:48.320 --> 00:13:56.320
But anyway, these concordance lines are then ordered randomly.

00:13:56.320 --> 00:13:59.320
So this is once again a leaf node in this tree.

00:13:59.320 --> 00:14:04.320
You can see this fancy leaf icon here.

00:14:04.320 --> 00:14:09.320
It marks the leaf nodes, the nodes we can actually view.

00:14:09.320 --> 00:14:14.320
And you see that these are two of the concordance reading strategies.

00:14:14.320 --> 00:14:19.320
So we select subsets of concordance lines and we reorder them.

00:14:19.320 --> 00:14:21.320
And here is a random sample.

00:14:21.320 --> 00:14:29.320
And this random sample looks much more balanced in terms of the books it comes from.

00:14:29.320 --> 00:14:36.320
And now if we try to find some patterns in there, we might want to sort it in a different way.

00:14:36.320 --> 00:14:39.320
So we add one more leaf node here.

00:14:39.320 --> 00:14:45.320
And here we sort it by the first word to the left.

00:14:45.320 --> 00:14:49.320
Both, both, her, her, her, her, his, his, his, his, his.
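
A sketch of this ordering step, assuming each line stores its token list and the index of the node word hands:

def sort_by_left_token(lines, offset=1):
    """Order lines alphabetically by the token `offset` positions left of the match."""
    def key(line):
        i = line["match_index"] - offset
        return line["tokens"][i].lower() if i >= 0 else ""
    return sorted(lines, key=key)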

00:14:49.320 --> 00:14:54.320
And here comes the first observation that we can make here.

00:14:54.320 --> 00:14:59.320
That the word hands is most often preceded by a possessive pronoun.

00:14:59.320 --> 00:15:04.320
And this is something that is very clearly observable once we perform this sorting,

00:15:04.320 --> 00:15:09.320
once we create this concordance view at this leaf node in our tree.

00:15:09.320 --> 00:15:15.320
And here we can document where this finding becomes obvious.

00:15:15.320 --> 00:15:21.320
Well, as you can see here we have already five nodes in the tree.

00:15:21.320 --> 00:15:24.320
Most of them are extremely trivial.

00:15:24.320 --> 00:15:32.320
So I'm now giving the floor to Nathan who will introduce some non-trivial things to you.

00:15:32.320 --> 00:15:34.320
Thank you.

00:15:34.320 --> 00:15:44.320
So, as Sasha just said, we can see a few things by just sorting the entire concordance randomly.

00:15:44.320 --> 00:15:49.320
We can see some of those things similar to what you pointed out in the beginning.

00:15:49.320 --> 00:15:51.320
There are possessive pronouns happening.

00:15:51.320 --> 00:16:00.320
So it's not like it's useless to do this, but also it's kind of difficult to identify any more sophisticated patterns unless we apply a specific strategy.

00:16:00.320 --> 00:16:08.320
So the starting point noted here, the result of that, is basically the concordance that we showed you right at the beginning,

00:16:08.320 --> 00:16:13.320
where all of you already picked up on a few more specific patterns.

00:16:13.320 --> 00:16:20.320
And this is because we were looking at a very carefully selected sample here, which was lines containing hands,

00:16:20.320 --> 00:16:29.320
which also contained at least four occurrences of the words listed here: and, her, him, his, pockets, rubbing, said, shook, with.

00:16:29.320 --> 00:16:33.320
You might be wondering: how on earth did we come up with this?

00:16:33.320 --> 00:16:45.320
Basically, we calculated the collocates of hands, but rather than calculating them on the entire corpus, we compared the narrative passages to the quotes.

00:16:45.320 --> 00:16:54.320
So basically, you can think of these words as being significantly more frequent in combination with hands in the non quotes than they are in the quotes.

00:16:54.320 --> 00:17:00.320
So these are kind of characteristics, statistically speaking, for the use of hands in this particular context.

00:17:00.320 --> 00:17:08.320
And in FlexiConc terms, what we did in order to create that concordance was to rank the lines,

00:17:08.320 --> 00:17:13.320
the overall concordance by the number of occurrences, by the number of matches for the regular expression, if you will,

00:17:13.320 --> 00:17:18.320
and select all of those that have a minimum of four occurrences.
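
A standalone sketch of this rank-and-select combination, with the collocate list from the slide written as a regular expression:

import re

# Collocates of "hands" that are characteristic of the narrative sections
# (from the slide): and, her, him, his, pockets, rubbing, said, shook, with.
COLLOCATES = re.compile(r"\b(and|her|him|his|pockets|rubbing|said|shook|with)\b", re.IGNORECASE)

def rank_and_select(lines, min_count=4):
    """Rank lines by their number of collocate matches, keep those with at least min_count."""
    counted = [(len(COLLOCATES.findall(line["text"])), line) for line in lines]
    counted.sort(key=lambda pair: pair[0], reverse=True)
    return [line for count, line in counted if count >= min_count]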

00:17:18.320 --> 00:17:21.320
Now, this is not something we came up with for this project.

00:17:21.320 --> 00:17:22.320
There are similar algorithms around.

00:17:22.320 --> 00:17:28.320
So this particular one is basically a version of the KWICGrouper that is implemented in CLiC.

00:17:28.320 --> 00:17:37.320
However, in FlexiConc, we are more flexible regarding the window size, so we can specify the maximum distance from hands at which we allow

00:17:37.320 --> 00:17:38.320
these words to occur.

00:17:38.320 --> 00:17:42.320
And I'll show you an example of that, how that can be useful later on.

00:17:42.320 --> 00:17:46.320
We can also, for instance, choose to count types versus tokens.

00:17:47.320 --> 00:17:55.320
So do we want to count each match of rubbing as its own thing or do we say, OK, rubbing has occurred one time?

00:17:55.320 --> 00:17:58.320
Every other occurrence doesn't count here.
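
The difference is easiest to see in a tiny sketch:

def count_matches(tokens, targets, count_types=False):
    """Count target words: every match (tokens), or each distinct word once (types)."""
    hits = [t.lower() for t in tokens if t.lower() in targets]
    return len(set(hits)) if count_types else len(hits)

words = "rubbing his hands and rubbing them again".split()
print(count_matches(words, {"rubbing", "his", "and"}))                     # 4 token matches
print(count_matches(words, {"rubbing", "his", "and"}, count_types=True))  # 3 distinct types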

00:17:58.320 --> 00:18:12.320
And then as we did here, we can combine these various operations, the ranking and selecting together and then apply further algorithms to that selection to really focus in on this particular context.

00:18:12.320 --> 00:18:15.320
This is what it looks like in the notebook.

00:18:15.320 --> 00:18:28.320
Again, not nicely visible, but what you can see is that you add a node as an object to the concordance, and we specify the two algorithms:

00:18:28.320 --> 00:18:38.320
the ranking ("order") and the select. And you can see we have a number of parameters that you can flexibly choose from and adapt to your needs.

00:18:39.320 --> 00:18:41.320
Next, you might be interested.

00:18:41.320 --> 00:18:47.320
So, people have already picked up on this, right: we have some things happening right before hands.

00:18:47.320 --> 00:18:49.320
So what we can call the left context.

00:18:49.320 --> 00:19:01.320
So basically, one way to explore this is to generate trigrams, "something, something, hands", and then arrange them by frequency.

00:19:01.320 --> 00:19:10.320
And this is also something implemented in FlexiConc directly, which we call a partition algorithm,

00:19:10.320 --> 00:19:14.320
where we can group lines by n-gram patterns.

00:19:14.320 --> 00:19:19.320
So in this case, trigrams and then automatically sort the output by frequency.
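
A minimal sketch of such a partition algorithm: group the lines by the n-gram that ends at the node word and order the groups by size:

from collections import defaultdict

def partition_by_ngram(lines, n=3):
    """Group lines by the n-gram ending at the match (e.g. 'with his hands')."""
    groups = defaultdict(list)
    for line in lines:
        i = line["match_index"]
        ngram = " ".join(line["tokens"][max(0, i - n + 1): i + 1]).lower()
        groups[ngram].append(line)
    # most frequent pattern first
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)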

00:19:19.320 --> 00:19:23.320
Again, this is not something we invented from scratch.

00:19:23.320 --> 00:19:35.320
So anyone familiar with AntConc might have come across this, maybe even unknowingly, because in AntConc this is just something that is applied by default in certain settings.

00:19:35.320 --> 00:19:42.320
However, in this FlexiConc implementation, we're flexible regarding the n-gram size.

00:19:42.320 --> 00:19:48.320
So we could choose to look at 5-grams or any n-gram size that makes sense for our context.

00:19:48.320 --> 00:19:55.320
And we have this as an explicit algorithm that we can control, rather than an implicit sorting operation.

00:19:57.320 --> 00:19:59.320
So what do we see there?

00:19:59.320 --> 00:20:07.320
Well, the most frequent pattern, at 32 occurrences in this concordance, is with his hands.

00:20:07.320 --> 00:20:10.320
This is the most frequent trigram by far.

00:20:10.320 --> 00:20:13.320
And it's almost exclusively followed by in his pockets.

00:20:13.320 --> 00:20:23.320
So those of you who listened carefully to the very first talk might be experiencing some recognition, which, of course, is entirely coincidental.

00:20:24.320 --> 00:20:27.320
So by the way, we do get some variation here as well.

00:20:27.320 --> 00:20:35.320
So it's not only with his hands in his pockets; with his hands behind him is another thing that occurs repeatedly.

00:20:35.320 --> 00:20:39.320
Then you can see the next trigram is significantly less frequent.

00:20:39.320 --> 00:20:45.320
Seven occurrences: rubbing his hands, which also seems to be part of a more complex pattern.

00:20:45.320 --> 00:20:52.320
So it's usually followed by a prepositional phrase that specifies some kind of emotion that is happening while he's rubbing his hands.

00:20:52.320 --> 00:20:57.320
So he's rubbing his hands because he's ecstatic or satisfied or whatever.

00:20:57.320 --> 00:21:03.320
And now, as we've noted a few times already, her hands is a lot less common.

00:21:03.320 --> 00:21:10.320
So it's kind of difficult from this to see, well, what are female characters doing with their hands, if anything?

00:21:10.320 --> 00:21:21.320
Right. We had one observation earlier on that, yes, if they are mentioned, they seem to be in relation to a male character,

00:21:21.320 --> 00:21:25.320
which again you might recognize from the first talk.

00:21:25.320 --> 00:21:31.320
But of course, we can apply a dedicated strategy again to actually focus in on those occurrences.

00:21:31.320 --> 00:21:39.320
And we find confirmed, yes, her hands tend to be upon his arm, upon his shoulders, on his knee and all that kind of stuff.

00:21:40.320 --> 00:21:51.320
And as the final, somewhat more complex example, we were interested in how hands is used in relation to other body parts,

00:21:51.320 --> 00:22:00.320
because quite often it's not only hands; several parts of the body are mentioned in this context.

00:22:00.320 --> 00:22:08.320
So we did this by applying basically the same kind of algorithm that we used earlier on for the collocates.

00:22:08.320 --> 00:22:12.320
So the and, her, him, rubbing stuff.

00:22:12.320 --> 00:22:20.320
But instead, we passed this list of, well, body parts, body part nouns.

00:22:20.320 --> 00:22:32.320
And because FlexiConc allows for this flexible setting of parameters, we could easily specify that, well, in this particular instance,

00:22:32.320 --> 00:22:34.320
we want a larger context window.

00:22:34.320 --> 00:22:42.320
So we want to allow for the body part nouns to occur at a larger distance from the word hands, because we assumed, well,

00:22:42.320 --> 00:22:50.320
you know, people might be engaging in some kind of more elaborate description that just uses up more words.

00:22:50.320 --> 00:22:56.320
So we set the window size to 10 tokens to the left and to the right rather than five as before.

00:22:56.320 --> 00:23:00.320
And this could easily be done just by replacing the number.

00:23:00.320 --> 00:23:09.320
We also decided to count on tokens rather than types so that each occurrence of say finger would be counted separately.
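
Sketching the counting step with the window and the counting unit as explicit parameters shows how little has to change:

def count_in_window(tokens, match_index, targets, window=10, count_types=False):
    """Count target words within `window` tokens to the left and right of the match."""
    lo, hi = max(0, match_index - window), match_index + window + 1
    hits = [t.lower() for t in tokens[lo:hi] if t.lower() in targets]
    return len(set(hits)) if count_types else len(hits)

# window=5 was used for the collocate step above; here we simply pass window=10,
# and count_types=False so that each occurrence of, say, "finger" counts separately.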

00:23:09.320 --> 00:23:19.320
And a nice aspect that we haven't really mentioned before is that, because the analysis tree tracks all of these steps that we previously applied,

00:23:19.320 --> 00:23:25.320
we can also navigate the tree quite easily to then apply algorithms to some other position.

00:23:25.320 --> 00:23:31.320
And this is really something, you know, if you've kind of intuitively gone through all of the concordances,

00:23:32.320 --> 00:23:42.320
it would be very difficult to go back to a specific step in your analysis if you just did this by clicking around as would be the case in most corpus tools.

00:23:42.320 --> 00:23:51.320
So what we can do with this is to apply this ranking algorithm to two different parts of the analysis tree that we have so far.
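
Continuing the Node sketch from earlier, navigating back to an earlier node and attaching a new algorithm there might look like this:

def find_node(node, label):
    """Depth-first search for a node by its label in the analysis tree."""
    if node.label == label:
        return node
    for child in node.children:
        found = find_node(child, label)
        if found is not None:
            return found
    return None

# subset = find_node(root, "rank+select: collocates")     # the restricted branch
# node_10 = subset.apply("rank: body parts", rank_and_select)
# node_11 = root.apply("rank: body parts", rank_and_select)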

00:23:51.320 --> 00:23:56.320
So in the first scenario, we applied it to the subset that we started the talk with.

00:23:56.320 --> 00:24:06.320
So basically, all of the lines that also contain at least four instances of and, her, him, and so on.

00:24:06.320 --> 00:24:13.320
So in FlexiConc terms, this would be a rank-and-select followed by another ranking algorithm.

00:24:13.320 --> 00:24:19.320
And what we tend to get here are examples like the one you see here.

00:24:20.320 --> 00:24:29.320
So basically we get kind of interpersonal relations that are quite intense between two characters, usually male and female,

00:24:29.320 --> 00:24:38.320
kind of doing things; in this case, having her hands upon his arm again, and shaking her head.

00:24:38.320 --> 00:24:45.320
So her body parts are really used as accompanying this kind of intense situation.

00:24:46.320 --> 00:24:53.320
Whereas if we apply it to the overall concordance, just all of the mentions of hands outside of quotes,

00:24:53.320 --> 00:25:00.320
well, we get all of the ones above as well, because they're just this more specific subset of the overall concordance.

00:25:00.320 --> 00:25:08.320
But we also get these very elaborate person descriptions that refer to somebody's appearance.

00:25:09.320 --> 00:25:19.320
So in this case, this maybe not very gratifying description of someone having large bones, large features, large feet and hands, large eyes, and so on.

00:25:20.320 --> 00:25:28.320
And again, entirely coincidentally, we see there seems to be some other pattern going on here as well.

00:25:28.320 --> 00:25:35.320
So again, this is a bit difficult to see, but just so you get an impression of the overall tree representation.

00:25:35.320 --> 00:25:38.320
So what did we end up with after these steps?

00:25:38.320 --> 00:25:50.320
And the one thing you might be able to see is the indentation, which represents which specific node in the tree the next algorithm was applied to.

00:25:50.320 --> 00:26:00.320
So that also lets you track quite visually what your steps in the process are, specifically the two ranking algorithms that I showed you on the previous slide.

00:26:01.320 --> 00:26:12.320
So node 10 is the first one, applied to the restricted context, and so it's indented further to the right, whereas node 11 was applied to the original query for hands.

00:26:14.320 --> 00:26:20.320
OK, so we have a few conclusion slides. On a content-based note,

00:26:20.320 --> 00:26:27.320
we can say, well, hands are mentioned much more frequently for male characters than for female characters.

00:26:27.320 --> 00:26:28.320
This is also true for other body parts.

00:26:28.320 --> 00:26:32.320
I think it's just a feature of men being mentioned more than women.

00:26:33.320 --> 00:26:39.320
Men's hands tend to be described in relation to the characters themselves, so they're behind him or in his pockets,

00:26:40.320 --> 00:26:55.320
whereas women's hands are again described as being positioned in relation to men, upon his arm, or covering her face, or clasping her hands, in some kind of intense, anxious mode.

00:26:55.320 --> 00:27:14.320
On a grammatical note, we could see that there were a lot of with- and in-constructions accompanying the mention of hands, which we could interpret as hand movements and positions being described as accompanying something else.

00:27:14.320 --> 00:27:17.320
So he's doing something with his hands in his pockets.

00:27:17.320 --> 00:27:19.320
It's not really about the hands.

00:27:19.320 --> 00:27:33.320
The hands are just underlining whatever else is happening in that moment, which is often speech or emotions, very explicit showing and telling of emotions, in a way.

00:27:34.320 --> 00:27:36.320
And then the physical descriptions.

00:27:38.320 --> 00:27:41.320
And a couple of concluding remarks on FlexiConc.

00:27:41.320 --> 00:27:44.320
So it might seem a bit tedious.

00:27:44.320 --> 00:27:49.320
We know we have seen these long examples, these long queries.

00:27:49.320 --> 00:28:00.320
But it is important that using our tool for research documentation allows us to lay out all research steps very, very clearly.

00:28:00.320 --> 00:28:06.320
And it is very easy to reproduce a study with other queries or other corpora.

00:28:07.320 --> 00:28:16.320
So we can simply replace hands with eyes in our notebook and get a similar study, which would be absolutely impossible otherwise.
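
With the hypothetical names from the retrieval sketch earlier, the whole study becomes a function of the query term:

def run_study(query="hands"):
    """Re-run the identical pipeline for another word form, e.g. 'eyes'."""
    conc = Concordance()                   # assumed API, as in the sketch above
    conc.retrieve_from_clic(query=query, corpus="dickens", subset="nonquote")
    return conc                            # ...then apply the same tree of algorithms

eyes_study = run_study("eyes")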

00:28:16.320 --> 00:28:23.320
And of course, FlexiConc can be incorporated into what-you-see-is-what-you-get concordancing tools to make it a bit more user-friendly.

00:28:23.320 --> 00:28:31.320
And the most important thing about FlexiConc is the algorithms, which are very, very flexible.

00:28:31.320 --> 00:28:40.320
So we have shown that our KWICGrouper is much more flexible than the KWICGrouper in CLiC.

00:28:40.320 --> 00:28:46.320
We have shown that our KWIC patterns are much more flexible than the KWIC patterns in AntConc.

00:28:46.320 --> 00:28:56.320
And actually, this is something that we find really important, because setting the parameters can be very, very useful for research purposes.

00:28:56.320 --> 00:28:59.320
And of course, we are open to implementing new algorithms.

00:28:59.320 --> 00:29:10.320
So your suggestions about concordance reading algorithms that need to be implemented in FlexiConc are also very, very welcome.

00:29:10.320 --> 00:29:13.320
Well, that's basically it.

00:29:13.320 --> 00:29:15.320
And here is one more concordance.

00:29:15.320 --> 00:29:17.320
Thank you.


