[00:00:31] Speaker 04: May it please the court, Gotham Putnick with John Damaris on behalf of IBM. [00:00:38] Speaker 04: Both patents at issue in this appeal are directed to better capturing and utilizing user context variables in a search system to rank and prioritize search results provided to a user. [00:00:51] Speaker 04: User context is information about the user, including information such as the user's demographics, background, as well as the user's prior searches and selected results in that search system. [00:01:03] Speaker 04: User context variables are mapped to the search results to better weigh and rank them so that the most relevant results are provided to the user. [00:01:12] Speaker 04: This differs from what was in the prior art, which either didn't use context variables or information or was only able to use a limited amount of user context information. [00:01:21] Speaker 01: Does the patent describe how those determinations are made, how it aligns them in order of what's the most useful, most relevant? [00:01:33] Speaker 04: There's an order and annotation algorithm. [00:01:36] Speaker 01: I know that's what the claims say, order and annotation. [00:01:38] Speaker 01: Do we know what that means or how that's done? [00:01:40] Speaker 04: Well, I think the scope of the patent is more on the acquiring of the user context information and then applying it to that algorithm, the order and annotation algorithm. [00:01:49] Speaker 04: The order and annotation algorithm itself is not the scope, or the focus, of the claims. [00:01:55] Speaker 00: But is it the claim language in that particular claim? [00:02:00] Speaker 00: I guess we're talking about the 676 patent. [00:02:03] Speaker 00: Is it result-oriented in its language, or is it a specific implementation? [00:02:09] Speaker 00: And if it is a specific implementation, what exactly in the claim language would compel you to say so? [00:02:19] Speaker 04: 676, claim 14. [00:02:21] Speaker 04: I just want to make sure we're talking about the same thing. [00:02:24] Speaker 04: Sure. [00:02:24] Speaker 04: In that claim language, it is not results-oriented in the sense that it is focusing on a user context vector and what is in the vector and how it is applied to the order and annotation [00:02:38] Speaker 04: What is a vector? [00:02:40] Speaker 04: Sure. [00:02:41] Speaker 04: The vector is a data structure that's claimed in the patent and it has specific information. [00:02:48] Speaker 04: It has the user context that's been acquired in past searches as well as the current search. [00:02:52] Speaker 04: It also has the user's preferences and attributes that they want to focus on. [00:02:57] Speaker 04: For instance, if it's a business person who's over 50 and has certain preferences on the types of hotels they stay in, those could be user context attributes that are captured in that vector. [00:03:08] Speaker 04: The important part here, and the patent talks about the use of the vector to help with machine learning, and the reason the vector becomes important in the structure of it, and it's specific per our claim construction in the lower court, was that there's a certain amount of, it's the n-dimensional, meaning the number of attributes that we want to capture, and there's heterogeneous data. [00:03:31] Speaker 04: What does that mean? [00:03:32] Speaker 04: That means that all these attributes are disparate types of information.
[00:03:36] Speaker 04: What the vector does, and the importance of it in this patent, is a vector makes all that information homogeneous. [00:03:43] Speaker 04: So it can then be used and applied to the algorithm. [00:03:46] Speaker 00: What do you mean by it makes it homogeneous? [00:03:49] Speaker 04: Meaning it's the same type of information for machine learning. [00:03:52] Speaker 04: It's the conversion of disparate information into data that can be used for the machine learning. [00:03:59] Speaker 01: So there's no limitation. [00:04:01] Speaker 01: I mean, this isn't like the Weisner case where you're just talking about travel, [00:04:04] Speaker 01: used as an example [00:04:06] Speaker 01: of what the user might do. [00:04:08] Speaker 01: But the user context vector includes just any search information regarding a particular user that they've ever used on anything. [00:04:17] Speaker 01: And then when they do a particular search to buy a car, it sorts out the car related stuff and ranks it. [00:04:25] Speaker 04: A little different, your honor. [00:04:27] Speaker 04: If you look at the 193 patent, for instance. [00:04:29] Speaker 01: Well, I'd like to, but if we can focus on the 676. [00:04:32] Speaker 04: Sure. [00:04:33] Speaker 04: Whatever the user selects as the attributes that it cares about are the ones that will be focused on in the vector, will be included in the vector. [00:04:41] Speaker 04: So the search deals with a car. [00:04:43] Speaker 01: But it deals with everything. [00:04:45] Speaker 04: Anything and everything users have to search for [00:04:48] Speaker 04: within the specific self-service system. [00:04:50] Speaker 04: The claims are dealing with a specific self-service system. [00:04:53] Speaker 04: So if you think about like a rental agency, if you want to go to a rental agency and you want to pick what kind of car you like, you can input that information into the system and it'll generate options, better options based on what your preferences are. [00:05:10] Speaker 01: I'm still not clear on what limits, if any, exist in terms of the kind of information that's in the user context vector. [00:05:17] Speaker 01: But more importantly for me is, Judge Stoll, I think, used the words, how do you do it? [00:05:23] Speaker 01: How is it that the information is sorted out? [00:05:27] Speaker 01: I mean, what is this? [00:05:28] Speaker 01: This invention is you then have this information in the user context vector, and then based on the search that's done by the user, [00:05:37] Speaker 01: something happens to prioritize all of the historical information you have in the user context vector? [00:05:43] Speaker 04: A little different, Your Honor. [00:05:47] Speaker 04: What you did, what you searched before, that data will go into the vector, so it will become a relevant data point for the next analysis. [00:05:55] Speaker 04: And the next analysis is using your past searches in part, as well as your current search, to figure out what would be most apt for you. [00:06:02] Speaker 01: And how does it do that? [00:06:03] Speaker 01: Do we have any information in the spec about how [00:06:06] Speaker 01: that step is done. [00:06:07] Speaker 04: Well, that is the order and annotation algorithm. This is more about how you get the user context variables, and then the use of it and the application of it is the order and annotation algorithm. [00:06:22] Speaker 00: But the order and annotation algorithm is claimed, right?
[00:06:26] Speaker 04: It's part of the claim, but the focus of the claim is the use of the user context vector. [00:06:31] Speaker 00: And what does it mean exactly when you say mapping user context vector onto the response set? [00:06:40] Speaker 04: The search results. [00:06:41] Speaker 04: So if you have the user context vector and you have the data that you've homogenized, meaning you can use it. [00:06:47] Speaker 00: Is mapping a computer science construct? [00:06:50] Speaker 00: And what does it mean? [00:06:51] Speaker 00: It does. [00:06:52] Speaker 04: It means comparing the search results that you have and using the user context vector [00:06:58] Speaker 04: then to figure out which ones are most relevant. [00:07:01] Speaker 03: I'm sorry, when you keep saying user context vector you mean what? [00:07:06] Speaker 04: Information about the user in a specific format, Your Honor. [00:07:09] Speaker 04: It's what's claimed and put in the spec. Sure, I can give you a cite. [00:07:36] Speaker 04: So if you look at appendix 106, column 19, lines 35 to 62. [00:08:26] Speaker 04: I'm sorry, columns 35 to 62. [00:08:29] Speaker 04: Column 19, lines 35 to 62. [00:08:32] Speaker 00: Where does it talk about that it's a particular data structure, for example? [00:08:40] Speaker 00: There's a lot of words here. [00:08:43] Speaker 04: Yeah, this is the aspect of it being predictive. [00:08:49] Speaker 04: Why don't we go to [00:08:58] Speaker 04: Why don't we go to Appendix 99? [00:09:02] Speaker 04: Or we'll say Appendix 106, column 19, lines 4 through 12. [00:09:10] Speaker 01: 4 through 12. [00:09:13] Speaker 04: I think that's a better cite. [00:09:18] Speaker 00: And this here is kind of consistent with the proposed construction, right? [00:09:24] Speaker 03: That's correct. [00:09:28] Speaker 01: So the answer to Judge Hughes' question is from the combination of user context and previous actions within the system to map specific context. [00:09:42] Speaker 03: How does that talk about a specific data structure? [00:09:46] Speaker 03: Isn't that just saying what I said, that you use information about the user and their past searches to refine new searches? [00:09:55] Speaker 03: Here's my problem. [00:09:57] Speaker 03: I mean, I'll lay it on the table. [00:09:59] Speaker 03: To me, what you're claiming in claim 14 is exactly that, that you use information about the user, about their past searches, about their demographics, and use it to generate better results in searches. [00:10:14] Speaker 03: If that's what this patent is about, that's an abstract idea to me, and it's not eligible. [00:10:19] Speaker 03: So what it needs to do is show how that's done in a specific way, [00:10:24] Speaker 03: the specific data structures or some specific invention, rather than just this more general idea of using these kinds of information to get better search results. [00:10:36] Speaker 03: Because we know from our past cases that collecting, analyzing, classifying, and displaying information in that kind of very broad claimed language, functional language, is not eligible. [00:10:50] Speaker 03: I mean, we have dozens of cases that say that. [00:10:53] Speaker 03: So what we're asking you, and I get it, there's a lot of language in here that feels like these are things that maybe are actual inventions and there's lexicography, but when we ask you about them, you're pointing us back to very general language that doesn't say anything.
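For readers trying to picture the "mapping" being discussed, the following is a minimal sketch, in Python, of one way a numeric user context vector could be mapped onto a response set to produce an ordered, annotated response set. It is an illustration under assumptions only: the transcript does not disclose a particular scoring formula, and every name in the sketch (ResponseItem, score_result, order_and_annotate) is hypothetical rather than taken from the claims or specification.

```python
# Purely illustrative sketch, not the patents' ordering and annotation algorithm,
# which the argument makes clear is not spelled out. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ResponseItem:
    title: str
    features: dict[str, float]                          # numeric features describing the result
    annotations: list[str] = field(default_factory=list)


def score_result(user_context: dict[str, float], item: ResponseItem) -> float:
    """Weight a result by how well its features line up with the user context."""
    return sum(user_context.get(name, 0.0) * value
               for name, value in item.features.items())


def order_and_annotate(user_context: dict[str, float],
                       response_set: list[ResponseItem]) -> list[ResponseItem]:
    """Rank the response set against the user context and tag each item with its rank and score."""
    ranked = sorted(response_set,
                    key=lambda item: score_result(user_context, item),
                    reverse=True)
    for rank, item in enumerate(ranked, start=1):
        item.annotations.append(f"rank={rank}, score={score_result(user_context, item):.2f}")
    return ranked
```

A weighted dot product is used here only because it is the simplest way to show a vector influencing ranking; nothing in the excerpted argument ties the claimed ordering and annotation function to that choice.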
[00:11:10] Speaker 04: If I could turn you to the complaint, Your Honor. [00:11:12] Speaker 03: I don't want to look at the complaint. [00:11:14] Speaker 03: You can do it if you want. [00:11:15] Speaker 03: I want to see it in the patent. [00:11:17] Speaker 03: Because the patent is where you claim your invention. [00:11:21] Speaker 03: And that's the first source. [00:11:22] Speaker 03: I know we have all this other stuff about what you can claim to get over a 12b6 motion and the like. [00:11:29] Speaker 03: But I want to see what the patent says your invention is. [00:11:34] Speaker 04: Yeah, I think the best place is in the claim, Your Honor, because it is saying it's not simply just taking information and moving it around. [00:11:41] Speaker 04: It is specifically taking this information that it got in a way that it wasn't available before, which is the... Well, you say that. [00:11:49] Speaker 03: You say that the user context vector is a new form of the information, [00:11:55] Speaker 03: but then you don't say how you do that or what it means. [00:11:58] Speaker 03: I get it. [00:11:59] Speaker 03: If user context vector was a kind of coined term or it's your invention and you said, here's what a user context vector is. [00:12:09] Speaker 03: Here is how you put that together. [00:12:11] Speaker 03: Here's the structure of the data. [00:12:13] Speaker 03: Like the Finjan case where it's a security tag or something and it says, here's the way that data structure works to increase security. [00:12:22] Speaker 03: That was a patent-eligible idea. [00:12:24] Speaker 03: Where is that correspondence to user context vector in the specification? [00:12:31] Speaker 04: I still think column 19, the spec that I showed you, I still think that's the best language because it tells you not only what's in it, but how it was obtained. [00:12:40] Speaker 01: So tell us, you were talking about lines five through something or other? [00:12:44] Speaker 04: Correct, five through [00:12:49] Speaker 04: Five through 13. [00:12:51] Speaker 01: How does this explain? [00:12:57] Speaker 03: I mean, it says, inferences and conclusions are made regarding both the individual user's preferred resource characteristics, and those of a common set of users. [00:13:07] Speaker 03: How? [00:13:10] Speaker 04: It's acquired through the system if you look at figure one of the patent. [00:13:16] Speaker 04: What page is that on? [00:13:17] Speaker 04: That is on appendix 90. [00:13:26] Speaker 04: If you're looking at appendix 90, the user context vector, if you go from box 12 to the right, is where it first gets populated. [00:13:36] Speaker 04: And it also gets populated by user interaction records that are on the far right, 15. [00:13:42] Speaker 04: So there's different inputs going into the user context vector. [00:13:46] Speaker 04: And those are subsequently processed by the system overall. [00:13:52] Speaker 00: And the user interaction records, as I understand it, those are prior searches? [00:13:59] Speaker 04: Prior and current searches. [00:14:02] Speaker 00: Are those like, does the patent tell you what those are more than prior searches? [00:14:07] Speaker 00: Does it say that those are where the system can see that a user likes particular search results, or what sort of detail does it give on that? [00:14:17] Speaker 00: I don't know if that's necessary, but I'm just curious.
[00:14:20] Speaker 04: So like, claim 17 talks about how it's, the combination of claim 14 and claim 17, how it's populated with each interaction with the system. [00:14:29] Speaker 04: Every search and the results go into that piece or that part of the spec. [00:14:37] Speaker 04: And it's done on each query, so it's adaptive. [00:14:41] Speaker 04: It's done each time. [00:14:42] Speaker 04: I'm looking at the last limitation. [00:14:44] Speaker 01: So it prioritizes the records? [00:14:47] Speaker 01: It prioritizes the prior searches? [00:14:49] Speaker 04: It includes them. [00:14:50] Speaker 04: It's not where it's prioritized. [00:14:52] Speaker 04: It's prioritized later. [00:14:53] Speaker 01: So this is just all of someone's searches and then using [00:14:57] Speaker 01: But you're saying the patent does more than just taking all of the searches, and I don't know, doing what with them. [00:15:04] Speaker 01: Is there some way in which it prioritizes it to make it more useful to the user, right? [00:15:10] Speaker 04: That's the algorithm, Your Honor. [00:15:12] Speaker 04: That's the order and annotation algorithm that does the actual prioritizing. [00:15:17] Speaker 04: But that's not what we're talking about in Claim 14. [00:15:20] Speaker 04: That's the result of what it's mapped to in Claim 14. [00:15:23] Speaker 00: What are the things? [00:15:24] Speaker 00: There are different things that are [00:15:28] Speaker 00: touted as being technological improvements, either in the patent itself or the inventor declaration or the complaint. [00:15:37] Speaker 00: What claim limitations in claim 14 do you think result in those? [00:15:43] Speaker 00: For example, the adaptive learning. [00:15:45] Speaker 00: And how would you, could you walk through how a claim limitation results in adaptive learning, for example? [00:15:53] Speaker 04: Sure. [00:15:53] Speaker 04: And again, this goes back to the vector itself. [00:15:56] Speaker 04: But if you look at appendix [00:15:58] Speaker 04: I showed you 106, the predictive aspect, column 19, lines 35 to 62. [00:16:05] Speaker 04: Then you can go to the complaint itself. [00:16:08] Speaker 00: Let me make sure I ask the question. [00:16:10] Speaker 00: I think I've read all of that, and I know what the claimed technological advantages are. [00:16:15] Speaker 00: What I'm wondering about is, is there a one-to-one correspondence between the claim and achieving those claimed [00:16:22] Speaker 00: technological advantages. [00:16:25] Speaker 00: So looking at Claim 14, what language in Claim 14 results in, for example, adaptive learning by the computer? [00:16:33] Speaker 04: We have to connect the dots with the complaint and the inventor declaration to get there; the benefits come from the use of the vector, because you're trying to get to the predictive aspect. [00:16:44] Speaker 04: And the benefits come from the vector itself. [00:16:47] Speaker 00: Is it the idea in the receiving of the user context vector? [00:16:53] Speaker 00: Is it the idea that, let's see, not the idea, but what in the claim makes it so there's adaptive learning? [00:17:04] Speaker 00: Step B or step C or both of them? [00:17:07] Speaker 04: It's the ability. [00:17:08] Speaker 04: It's actually what's in the inventor declaration, Your Honor, that specifies that this is where they thought of changing heterogeneous information into homogeneous information to use for machine learning, which gets to the predictive aspect.
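To make the heterogeneous-to-homogeneous point concrete, here is a second minimal sketch, again purely illustrative, of how disparate user attributes (an age, a category preference, prior queries) might be encoded into a single fixed-length numeric vector and refreshed with each new query, in the spirit of the adaptive behavior counsel attributes to claims 14 and 17. The vocabulary, scaling choices, and function names are assumptions made for the example, not the encoding described in the patents.

```python
# Illustrative sketch only: homogenizing disparate user-context attributes into one
# fixed-length numeric vector, then folding each new query back into that vector.
# VOCAB, the scaling choices, and the function names are assumptions for the example.
from collections import Counter

VOCAB = ["hotel", "car", "flight", "house"]          # hypothetical attribute dimensions


def build_context_vector(age: int, preferred_category: str,
                         prior_queries: list[str]) -> list[float]:
    """Map heterogeneous attribute types onto a single homogeneous numeric vector."""
    age_part = [age / 100.0]                                          # scale a numeric field
    category_part = [1.0 if preferred_category == c else 0.0 for c in VOCAB]
    counts = Counter(term for q in prior_queries for term in q.lower().split())
    history_part = [float(counts[c]) for c in VOCAB]                  # counts of past interests
    return age_part + category_part + history_part


def update_with_query(vector: list[float], query: str) -> list[float]:
    """Fold the current query back into the history portion of the vector (adaptive step)."""
    terms = set(query.lower().split())
    offset = 1 + len(VOCAB)                                           # history starts after age + category
    history = [value + (1.0 if VOCAB[i] in terms else 0.0)
               for i, value in enumerate(vector[offset:])]
    return vector[:offset] + history


# Example: a 52-year-old business traveler who prefers hotels
ctx = build_context_vector(52, "hotel", ["cheap hotel downtown", "rental car deals"])
ctx = update_with_query(ctx, "boutique hotel near airport")
```

The per-query update is what would let later searches see earlier ones; how the patents actually populate and weight the vector is not specified in the excerpted discussion.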
[00:17:22] Speaker 00: One of the things I see the district court saying is that those kinds of advantages aren't captured by the claim. [00:17:29] Speaker 00: I think he says that on page 30 of his opinion, or has that idea. [00:17:33] Speaker 00: So what language should I be looking at to determine whether that's true or not? [00:17:42] Speaker 04: We would argue, Your Honor, that Cooperative Entertainment tells you the benefits [00:17:46] Speaker 04: of a claim can be part of it. [00:17:49] Speaker 04: If you can show the benefits that are plausibly tethered to the claim language itself, that those benefits stem from it. [00:17:57] Speaker 04: And the inventor declaration and the complaint support the use of the vector to convert heterogeneous information to homogeneous information. [00:18:06] Speaker 00: What about claim 17, where it says the annotation function being adaptable based on a history of user interactions [00:18:12] Speaker 00: as provided in a database of user interaction records. [00:18:16] Speaker 00: Is that something being relied on for adaptive learning? [00:18:19] Speaker 04: That's the part we talked about earlier in figure one, your honor. [00:18:24] Speaker 04: One of the inputs that gets fed into the vector itself is the current user interaction. [00:18:39] Speaker 01: We're way beyond time. [00:18:40] Speaker 01: We will restore some of it, by the way. [00:18:42] Speaker 01: Thank you. [00:19:05] Speaker 02: Good morning. [00:19:06] Speaker 02: May it please the court, Steve Siegel, on behalf of defendants-appellees Zillow Group Incorporated and Zillow Incorporated. [00:19:14] Speaker 02: I'd like to begin with [00:19:15] Speaker 02: several of the points that my colleague was making. [00:19:18] Speaker 02: The first is the question of specific implementation of the user context vector in claim 14. [00:19:25] Speaker 02: And I want to point out that the vector is just a data structure that has existed in mathematics. [00:19:33] Speaker 02: The concept of a vector is not something that IBM has claimed it has invented. [00:19:38] Speaker 02: We pointed this out in our red brief at page 25, or excuse me, at page 21, I think, and IBM didn't contest that. [00:19:45] Speaker 02: It doesn't argue that vectors are unique to computer science, or that they can only be applied in the context of computer applications. [00:19:53] Speaker 02: It doesn't even dispute that n-dimensional context vectors existed in the prior art that were conventional. [00:19:59] Speaker 00: They're instead saying that what's inventive is the way they use the vector, right? [00:20:04] Speaker 00: They're not claiming to have invented vectors, of course. [00:20:07] Speaker 00: But they're saying that the way they use the vector to improve the search results, that that's what their inventive concept is, right? [00:20:16] Speaker 02: To the extent that the way they use the vector is specific to the content of the vector, I think that's true. [00:20:22] Speaker 02: What IBM said at argument today and what it's repeated in its brief is that user context is information about the users. [00:20:29] Speaker 02: And so what IBM claims is novel about the vector is that it is, in their words, [00:20:34] Speaker 02: creating a homogenized set of heterogeneous variables into one data set that is specific to user interaction data or something about the user specifically.
[00:20:45] Speaker 02: And I think this court's precedent, particularly in the Electric Power Group case, is [00:20:50] Speaker 02: very clear that the content of the data doesn't make much of a difference at all to the abstract idea inquiry under step one. [00:20:59] Speaker 02: And it certainly doesn't help resolve the question of whether there's an inventive concept under step two. [00:21:04] Speaker 00: What if using a vector, for example, with particular content results in an improved system? [00:21:10] Speaker 00: You're saying that will never be considered eligible? [00:21:16] Speaker 02: It's certainly not. [00:21:17] Speaker 02: No, that's not the case at all. [00:21:18] Speaker 02: I think there certainly would be applications when the use of a very particular, very specific data structure that had improvements to computer functionality, where those improvements were reflected in the claims, would likely be a candidate for eligibility under Section 101. [00:21:34] Speaker 00: What if I were to say an improvement to computer functionality is improving its searching capability in an internet context where there are so many [00:21:45] Speaker 00: it reduces the volume of search results that a person might get. [00:21:51] Speaker 02: Again, I think it would certainly depend on the facts of the case and specifically the claim language at issue. [00:21:56] Speaker 02: One of the concerns that this court has repeated throughout its Section 101 jurisprudence is the concern about black box functional results-oriented claiming, where a result is specified, which may be better search results for the user. [00:22:13] Speaker 02: But that in and of itself doesn't provide a sufficient concrete advance at Alice step two to save an otherwise abstract idea from ineligibility under Section 101. [00:22:26] Speaker 00: Why not? [00:22:27] Speaker 00: Why not? [00:22:27] Speaker 00: In the context of this case, why not? [00:22:29] Speaker 00: In the context of... I mean, especially under 12b6, where statements in the complaint and the specification are supposed to be taken as true. [00:22:40] Speaker 02: Statements in the specification and the complaint are absolutely supposed to be taken as true, but there's a limit to that, which is that it has to be reflected. [00:22:47] Speaker 02: Those innovations, those inventions have to be reflected in the claim language itself. [00:22:53] Speaker 00: Why isn't the idea of adaptive learning, although at a basic level, captured by the claim when it talks about mapping [00:23:05] Speaker 00: this user context vector, which as they say it should be construed, and as you've agreed it could be construed, includes prior user interactions with the searching system. [00:23:18] Speaker 02: The claims are very specific in what they require. [00:23:22] Speaker 02: In the context of Claim 14, what the claims say is receiving a user context vector, and the specification confirms [00:23:32] Speaker 02: that this step of receiving a user context vector is the point at which the claims begin. [00:23:38] Speaker 02: There are a number of pre-processing steps that occur, which the specification refers to expressly as pre-processing steps.
[00:23:45] Speaker 02: So for instance, if you go to appendix 99, column 5, lines 34 through 40, this is where the specification explains that the system 10 [00:23:56] Speaker 02: performs, quote, several pre-processing steps, including one, creating an empty user context vector, and then dot, dot, dot, populating the context vector with minimal information from external data elements 11. [00:24:09] Speaker 02: This continues at column 6, line 61 to 63, [00:24:14] Speaker 02: where it explains with reference to this specific embodiment in the specification, which is the response set ordering and annotation processing. [00:24:21] Speaker 00: Maybe my question wasn't well said, but could you look at the claim for me? [00:24:26] Speaker 00: Because I'm just going to focus on the claim right now. [00:24:28] Speaker 00: You can go to the spec if it answers, but I'm afraid that you've gotten a little far afield from what I was asking. [00:24:34] Speaker 00: And my question is, why doesn't the claim as written capture the idea of [00:24:39] Speaker 00: modifying the search results, modifying the order of the search results and how things are annotated based on prior search results. [00:24:50] Speaker 00: Does it capture that? [00:24:52] Speaker 02: I don't think IBM has made any cogent argument for how the claims, at least in claim 14, capture incorporating the results of the prior search result into the current search result. [00:25:04] Speaker 00: Well, what if the, let me try that. [00:25:07] Speaker 00: I hear what you're saying. [00:25:08] Speaker 00: Now, we agree, you agree, user context vector, as defined for purposes of the 12b6 motion, [00:25:17] Speaker 00: is going to have prior search results, a history of user interactions, I think. [00:25:24] Speaker 00: OK? [00:25:25] Speaker 00: Correct. [00:25:26] Speaker 00: And then when it says applying an ordering in step C, I guess, [00:25:32] Speaker 00: mapping the user context vector with the resource response set to generate an annotated response set having one or more annotations. [00:25:42] Speaker 00: And then it says controlling the presentation according to the annotations, right? [00:25:48] Speaker 00: I understand that's broad, but why doesn't that cover that you're using prior searches in order to change how those [00:26:00] Speaker 00: search results are provided to the user. [00:26:04] Speaker 02: So two responses. [00:26:06] Speaker 02: One, I don't think that IBM has made a cogent argument for how that captures... That's fine, but we've got to make... This is de novo. [00:26:13] Speaker 00: I want to know what you think of my question. [00:26:15] Speaker 02: I think that it doesn't really matter for purposes of the Section 101 analysis in the end, because even if we were to accept that the claims somewhere recited that, capturing the prior search results and reusing them [00:26:28] Speaker 02: in your next search to improve the ordering and presentation of search results. [00:26:33] Speaker 02: That still is presenting an idea. [00:26:36] Speaker 02: It is a claim to an idea itself. [00:26:38] Speaker 02: And so the next question at Alice step two becomes, what within the claims is there that is significantly more than the idea itself?
[00:26:46] Speaker 02: And that's where this example I think falters most heavily, because there is nothing within the claims other than the idea of capturing prior user interaction and then using it [00:26:57] Speaker 02: in another round of searching to improve the search results. [00:27:01] Speaker 02: That's the extent of it; according to IBM's interpretation of the claims, that's all it says. [00:27:05] Speaker 02: It doesn't say anything more than that. [00:27:07] Speaker 02: And I think this court's precedent has been very clear that to save the claims at step two, there must be a specific and concrete improvement that is reflected in the claims themselves, something that is tangible as in the Bascom case, as in the Data Engine case. [00:27:21] Speaker 00: What do you think makes this not tangible? [00:27:24] Speaker 00: I mean, if it is, just going from the point of view that if searching technology is a technological field, then why isn't improving search results a technological innovation, even though it's software, even though it's methods? [00:27:45] Speaker 02: The general idea of improving search results is so broad that anything would fall within its scope. [00:27:52] Speaker 02: And so I think the court's underlying concern about preemption, which generally has not itself become a predominant focus of the 101 inquiry, is the motivating force behind why we require a specific and very concrete improvement that is reflected in the claims themselves. [00:28:10] Speaker 02: And this is where I keep stumbling. [00:28:13] Speaker 02: I apologize, Your Honor, but I can't. [00:28:14] Speaker 00: How do you distinguish Weisner? [00:28:18] Speaker 02: The Weisner case, and I think if you look carefully at the step two analysis in that case, the court was very clear that [00:28:28] Speaker 02: the specificity as to the mechanism through which they achieve improved search results, and I'm quoting from the opinion, through a location relationship with a reference individual or through the location history of the individual member who is running the search in a targeted geographic area, is what was sufficient in that case. [00:28:49] Speaker 02: But it was a very, very concrete and technical improvement to searching that was actually reflected in the claims themselves. [00:28:56] Speaker 02: And here we have the complete absence of that. [00:28:59] Speaker 02: All we have is a generic ordering and annotation mechanism that is described as a function. [00:29:04] Speaker 02: We don't know what it is. [00:29:05] Speaker 02: We have no idea as to its contents. [00:29:07] Speaker 02: We don't even know what the specific results are supposed to be. [00:29:10] Speaker 02: All we know is that it is applied as against [00:29:14] Speaker 02: a set of search results on one hand, and a user context vector on the other. [00:29:18] Speaker 00: So what I hear you to be saying is that if the claims were directed to a more specific use, like identifying how the prior search results are used, then that could be eligible. [00:29:32] Speaker 02: That's absolutely a different case than the one that we have before us today, Your Honor. [00:29:45] Speaker 02: I think the only other point that I wanted to address was the question of homogenizing data. [00:29:53] Speaker 02: And I think I already addressed this somewhat, but the specification makes very clear that that process is just received as an input to claim 14.
[00:30:02] Speaker 02: So there is no process by which the claims recite homogenizing user context data. [00:30:07] Speaker 02: It is simply received as an input as part of the claim 14 process and nothing further. [00:30:14] Speaker 02: If the court has no further questions, I'll cede the rest of my time. [00:30:18] Speaker 01: Thank you. [00:30:29] Speaker 01: I think you went over, but we'll restore three minutes of rebuttal if you need it. [00:30:33] Speaker 04: Thank you. [00:30:33] Speaker 04: I just wanted to address Your Honor's point on the interactive aspect, or how each query is used each time as part of the user context vector. [00:30:45] Speaker 04: I think you can look at the claim itself, and it's the last limitation of the claim, which is controlling the presentation of the resource response set to the user according to said annotations, wherein the ordering and annotation function is executed interactively at the time of each user query. [00:31:02] Speaker 04: So this process is being done with each query. [00:31:07] Speaker 04: So it goes into the system, and it goes into the vector, and then it gets used upon each use. [00:31:13] Speaker 04: And then you see that again in claim 17, where the last limitation says, generating the ordering and annotation function, said annotation function being adaptable based on a history of the interactions as provided in said database of user interaction records. [00:31:29] Speaker 04: So they work together in tandem where it's being fed each time and it's being used as part of the vector, [00:31:34] Speaker 04: as well as being applied to the algorithm itself. [00:31:39] Speaker 04: I did want to emphasize that IBM is not claiming that it patented or claimed a specific order and annotation algorithm in this patent. [00:31:47] Speaker 04: What it's claiming is that it's defining what inputs it's using to get information that wasn't previously used in the prior art to actually improve the process of finding better search results. [00:32:00] Speaker 00: What about the argument that's been made about the, [00:32:03] Speaker 00: OK, that it's so broad, since you don't know exactly what data it's relying on. [00:32:09] Speaker 04: It's broad in the sense of what data could be utilized for it. [00:32:14] Speaker 04: It's not broad in what you do with it. [00:32:16] Speaker 04: It's pretty specific in what you do with it. [00:32:18] Speaker 04: You put it in the vector, and then you apply it to the algorithm. [00:32:24] Speaker 04: But how it's actually obtained, the specification tells you where the information comes from. [00:32:29] Speaker 04: Some of the specification cites we already went over tell you what the data is and how it's acquired. [00:32:36] Speaker 04: And then if you go to the 193 patent, it gives you another indication of how it's acquired for that patent with the use of the three workspaces. [00:32:45] Speaker 04: So it's not claiming everything. [00:32:46] Speaker 04: It's claiming the specific implementations that are in these two patents, [00:32:53] Speaker 04: as far as, uh... it may speak to the 193 patent