[00:00:25] Speaker 01: Our next case is In-Depth Test versus Maxim Integrated Products and Vishay Intertechnology, 2019-1409 and 2019-1410. [00:00:36] Speaker 01: Mr. Greenspoon. [00:00:37] Speaker 04: Thank you, Your Honor. [00:00:43] Speaker 04: And good morning, Your Honors. [00:00:44] Speaker 04: May it please the court. [00:00:46] Speaker 04: Once the district court accepted the detailed agreed patent lexicography claim construction for outlier in this case that crystallized patent eligibility for the asserted claims, it was no longer possible to view the claims as simply using a computer as a tool, because under this court's case law, [00:01:09] Speaker 04: the prohibited mere use of a computer as a tool means adding computer functionality to well-known business practices. [00:01:16] Speaker 04: It doesn't satisfy that checkbox. [00:01:19] Speaker 04: Or performing conventional activities on a computer. [00:01:23] Speaker 04: And it doesn't satisfy that checkbox either. [00:01:26] Speaker 04: Instead... [00:01:27] Speaker 01: But regarding conventionality, your claims, which are quite limited: testing a component, generating test data, [00:01:39] Speaker 01: a computer configured to receive the data, identify an outlier. [00:01:44] Speaker 01: And when I look at your specification, every one of these limitations seems to refer to "according to any suitable algorithm," "any appropriate classification methods." [00:01:59] Speaker 01: Nothing is specialized. [00:02:02] Speaker 01: They're all general. [00:02:03] Speaker 04: I would disagree with you, particularly, Your Honor, with identifying an outlier. [00:02:08] Speaker 04: And that's why I focused immediately on the claim construction, because the claim construction sets forth an inventive application. [00:02:17] Speaker 04: In other words, the claim construction does things never done before in the semiconductor testing arts.
[00:02:23] Speaker 04: It was specific analysis of passing semiconductor components, not done before. [00:02:29] Speaker 04: Instead, the conventional semiconductor tester was only going to look for pass-fail without any regard for what utility might come about if you look at the passing components and do some extra work with those. [00:02:41] Speaker 03: So your invention is to add to the pass-fail a category of high pass, in effect? [00:02:49] Speaker 03: Or adding pass to high pass and fail? [00:02:52] Speaker 04: I want to be precise, it's adding outlier. [00:02:55] Speaker 03: Well, I understand. [00:02:56] Speaker 03: So in between pass and fail is outlier. [00:02:59] Speaker 03: But outlier is really nothing more, is it, than simply saying something that does not fail, but nonetheless is not as high a quality as we're going to insist on for certain purposes. [00:03:10] Speaker 04: Well, Your Honor dismissed that by saying it's merely that, but this was actually a point of... [00:03:16] Speaker 03: Well, okay, but is that a fair characterization? I'll strike the merely. [00:03:17] Speaker 04: It was that and more, because what it permitted when you were identifying outliers, you had two particular improvements in the machinery that were never before realized. [00:03:28] Speaker 04: Improvement number one was you can now do grading of reliability classifications without doing a second battery of tests. [00:03:36] Speaker 04: That was revolutionary. [00:03:38] Speaker 04: Then the second improvement, and this is mentioned in our briefing as well, is that you can use the outlier outcomes for a test and determine if the testing protocol itself is sound. [00:03:49] Speaker 04: And the last column, column 18 of the patent, goes into this in the most detail. [00:03:54] Speaker 04: But what that means is, let's say you have 200 tests on a component or 200 tests across all the components on a wafer.
[00:04:03] Speaker 04: You find that test number 100 creates a certain outlier fingerprint. [00:04:08] Speaker 04: Then you find that test number 150 creates the identical outlier fingerprint. [00:04:13] Speaker 04: Well, that's a very strong clue now to the operator that 100 and 150 among the test protocols are redundant, and you can eliminate one. [00:04:21] Speaker 04: And if you look in the background of the invention, there's a lot of discussion about how important it is to limit the amount of testing that has to be done. [00:04:30] Speaker 04: The assembly of the test rig is painstaking. [00:04:32] Speaker 04: That's a word in the patent. [00:04:35] Speaker 04: And the performance of the actual tests takes a lot of computing power to do. [00:04:39] Speaker 04: So it's a huge advantage, what I'm calling advantage number two today, [00:04:43] Speaker 04: to create a system and an invention, a new technology, an improvement to the existing technology that lets you actually compress the testing protocol for future tests. [00:04:54] Speaker 04: So it's not just reliability classifications, it's also creating this possibility that the operator can now improve the test rig itself on an operational basis. [00:05:06] Speaker 04: So how do we know that the outlier limitation improved an existing technology? [00:05:11] Speaker 04: There are three reasons we know that for sure. [00:05:14] Speaker 04: Reason number one, the Patent Trial and Appeal Board on six different occasions, at least five occasions with the PTAB, one in re-examination, found that the outlier limitation was not found in the prior art. [00:05:27] Speaker 04: These were adversaries who threw their best at it. [00:05:30] Speaker 04: And instead, all they could come up with were items of prior art that just did the pass-fail.
[00:05:35] Speaker 04: The second reason we know this is an improvement to an existing technology is the ability to do the [00:05:55] Speaker 04: The greatest detail in the preferred embodiment, it talks about critical, marginal, and good. [00:06:01] Speaker 04: But you can go from there. [00:06:03] Speaker 04: And then the third reason why this improved an existing technology is what I mentioned. [00:06:07] Speaker 04: It permitted the operator to actually improve the testing process itself. [00:06:12] Speaker 04: I can't understate how important that is, or can't overstate. [00:06:19] Speaker 04: The specific improvement in the way the outliers changed semiconductor testing equipment is embodied in the claim construction. [00:06:30] Speaker 04: It improved the functionality of an existing technological process. [00:06:33] Speaker 04: And because of that, it lines up beautifully with prior cases of this court where it found similar things and found patent eligibility. [00:06:43] Speaker 04: Your Honors know the catalog of cases. [00:06:45] Speaker 04: I would mention Enfish, Amdocs, Finjan, [00:06:51] Speaker 04: And what's more, there's a specific implementation. [00:06:55] Speaker 04: So when I mentioned the claim construction for outlier, it actually had quite a bit of detail in it. [00:06:59] Speaker 04: Implicit in the claim construction, one had to find a statistical baseline. [00:07:05] Speaker 04: That's the starting point. [00:07:06] Speaker 04: It's not the end point. [00:07:07] Speaker 04: It's the starting point. [00:07:11] Speaker 04: So imagine a scenario where [00:07:13] Speaker 04: those 200 tests were performed on 2,000 components. [00:07:18] Speaker 04: Now we're going to find the statistical baseline for each component test among the 2,000 samples. [00:07:25] Speaker 04: The second part of identifying an outlier is you eliminate the failures. [00:07:31] Speaker 04: That doesn't use statistics.
[00:07:32] Speaker 04: That's just using pre-assigned control limits. [00:07:35] Speaker 04: So you eliminate the failures. [00:07:37] Speaker 04: The failures are strays, but you eliminate them. [00:07:40] Speaker 04: Then the third step is what you have left [00:07:43] Speaker 04: are what the patent calls outliers. [00:07:46] Speaker 04: And by the way, this is using the term outlier in an unconventional way as well. [00:07:49] Speaker 04: So the claim language is pure patent lexicography. [00:07:55] Speaker 04: Because there was a sort of art-known use or connotation for the word outlier, the connotation in the prior art was a failure. [00:08:05] Speaker 04: We're not using the word outlier here in the context of a failure. [00:08:09] Speaker 04: There's a pure claim construction here where it's got to be a stray [00:08:14] Speaker 04: above or below that statistical baseline, but one that does not fail. [00:08:20] Speaker 04: We have a reproduction of Figure 9 in our briefing. [00:08:24] Speaker 04: And it shows white triangles. [00:08:27] Speaker 04: It's a highly stylized figure, of course. [00:08:29] Speaker 04: But it shows that, just to illustrate the principle, the wavy line in the middle is the statistical baseline. [00:08:38] Speaker 04: The black circles that go above the limits, those are the failures. [00:08:42] Speaker 04: And then the white triangles above and below [00:08:44] Speaker 04: the statistical threshold are the outliers. [00:08:49] Speaker 04: So on step one of Alice, KPN, we would submit, is an excellent roadmap, or contains an excellent roadmap, for how to proceed. [00:08:57] Speaker 04: KPN, of course, starts with finding: what is the focus of the claim? [00:09:01] Speaker 04: What is the asserted advance over the prior art? [00:09:03] Speaker 04: In this case, that's certainly identification of outliers.
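[Editor's note: the three steps counsel walks through — take a statistical baseline over the sample population, set aside failures using the pre-assigned control limits, and call the remaining strays (beyond the statistical threshold but still passing) outliers — can be sketched as follows. The one-standard-deviation threshold and all names here are illustrative assumptions; counsel notes the threshold could equally be one sigma, two sigma, etc.]

```python
# Minimal sketch of the three-step outlier identification described in argument.

def identify_outliers(results, lower_limit, upper_limit, k_sigma=1.0):
    # Step 1: statistical baseline (mean, standard deviation) over all samples.
    mean = sum(results) / len(results)
    sigma = (sum((r - mean) ** 2 for r in results) / len(results)) ** 0.5
    # Strays are results beyond the statistical threshold, high or low.
    strays = [r for r in results if abs(r - mean) > k_sigma * sigma]
    # Step 2: strays outside the control limits are ordinary failures; drop them.
    # Step 3: strays that still pass the control limits are the outliers.
    return [r for r in strays if lower_limit <= r <= upper_limit]
```

So with control limits of 0 to 10, a component reading well above the bulk of the population but still under 10 would be reported as an outlier, while a reading above 10 would simply be a failure.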
[00:09:06] Speaker 04: I think there's actually agreement between the parties that the, quote unquote, focus of the claim is identification of outliers. [00:09:15] Speaker 04: And then in KPN, it synthesizes a lot of the prior case law by saying you look at what were the goals or the intended results or desired results from the focus of the claim. [00:09:30] Speaker 04: So the goal and the intended or desired result in this case were the things I just mentioned, more generally improving semiconductor testing equipment. [00:09:39] Speaker 01: This isn't what the claims recite. [00:09:57] Speaker 04: Just as an aside, the claim is about as long as the claim in Finjan, the Finjan case. [00:10:02] Speaker 04: But of course, that ended up with a holding of eligibility. [00:10:05] Speaker 04: So I think everyone can agree that the length of the claim by itself, whether it be very long or very short, is not dispositive. [00:10:12] Speaker 01: I mentioned the length. [00:10:12] Speaker 01: I mentioned the limited limitations. [00:10:17] Speaker 04: And what Your Honor mentioned is that the performance of reliability classification, for example, is not a claim limitation. [00:10:24] Speaker 04: I would agree. [00:10:25] Speaker 04: But KPN teaches [00:10:28] Speaker 04: that the advantage of the claim doesn't have to be recited as a claim limitation for this analysis to proceed. [00:10:35] Speaker 04: So the key point in KPN, and this will put the bow on it, is that as long as the claim limitation is not simply claiming the intended result, that there's something more, there's more detail, there's more specificity in there, then you can pass it at step one. [00:10:54] Speaker 04: I see that I'm into my rebuttal time. [00:10:56] Speaker 04: I would simply ask that the court reverse the district court, and I'd like to reserve the rest of my time for rebuttal. [00:11:04] Speaker 01: We will do that, Mr. Greenspoon. [00:11:05] Speaker 01: Mr. Barkhain.
[00:11:14] Speaker 00: Good morning. [00:11:14] Speaker 00: May it please the court. [00:11:17] Speaker 00: As Your Honors' questions indicated, these claims are incredibly broad. [00:11:20] Speaker 00: And for each element in the claims, the specification tells us these are conventional components, conventional tester, conventional computer hardware, conventional report, and outliers to be designated in any suitable manner. [00:11:32] Speaker 00: Now, what we hear from In-Depth today is that the unconventional aspect is the identification of outliers. [00:11:39] Speaker 00: But of course, this court's precedent says that the abstract idea cannot itself supply the unconventional aspect for step two. [00:11:48] Speaker 00: There has to be something other than the abstract idea, because an abstract idea, no matter how novel or non-obvious, is still an abstract idea. [00:11:56] Speaker 00: And when counsel said that these claims line up with the court's precedent, I agree. [00:12:01] Speaker 00: But it doesn't line up with Enfish or Amdocs. [00:12:03] Speaker 00: It lines up with SAP America, Electric Power, Digitech, Content Extraction, all cases that involve the organization of data. [00:12:13] Speaker 00: And these claims really are about simply organizing data [00:12:16] Speaker 00: and selecting a subset of results. [00:12:17] Speaker 00: We can see that from Figure 9, which In-Depth's counsel mentioned. [00:12:21] Speaker 00: That's a visual depiction of what an outlier is. [00:12:24] Speaker 00: And in fact, we can see in there that this is an operation that humans could perform. [00:12:28] Speaker 00: You plot the data. [00:12:29] Speaker 00: You show where they exceed the pass-fail, the control limits. [00:12:33] Speaker 00: And then anything in the middle that stands out is an outlier. [00:12:36] Speaker 00: That's all there really is here.
[00:12:38] Speaker 00: Now, one thing that's important here is that [00:12:44] in In-Depth's briefing, for the first time, we hear a revised claim construction. [00:12:49] Speaker 00: There was a stipulated claim construction below, and the lexicographer's claim construction. [00:12:55] Speaker 02: Where is the stipulated construction found? [00:12:58] Speaker 00: It's in the court's Markman ruling, and it is the same construction that was proffered. [00:13:05] Speaker 02: Can you give me a JA page? [00:13:06] Speaker 02: I would like to be looking at it as you talk. [00:14:00] Speaker 03: On page 9 of the blue brief, there is a claim. [00:14:16] Speaker 00: I do have it written out, Your Honor; it's directly from the specification. [00:14:19] Speaker 00: We can find it in the patent. [00:14:20] Speaker 02: That little passage in column 6, line 44. [00:14:23] Speaker 00: Yes, it's exactly that. [00:14:26] Speaker 00: It's an outlier. [00:14:27] Speaker 00: It's a test result. [00:14:30] Speaker 00: It's in column six, a test result that strays from the first set of results but does not exceed control limits. [00:14:40] Speaker 00: Now, that was the agreed-upon claim construction. [00:14:42] Speaker 00: And what's significant is that that tells us what an outlier is, but not how to determine an outlier. [00:14:48] Speaker 00: The how we see for the first time in the new construction that is proposed by In-Depth in their briefing, where they come up with a series of rules.
[00:14:56] Speaker 00: Those rules were nowhere mentioned in the district court proceedings, and they weren't [00:15:00] Speaker 00: proposed in any claim construction by In-Depth. And in fact, they don't really follow the specification either, because the specification tells us, as we've seen, that outliers can be designated in any suitable manner, and there's no particular order of steps. There's nothing in the specification that says an outlier must be determined in the order of steps that are specified in In-Depth's new construction with these rules. In-Depth's rules also mention a threshold, and [00:15:28] Speaker 00: there's no requirement that a threshold be used to determine an outlier. [00:15:31] Speaker 00: If we look at Figure 9 of the patent, which visually illustrates what an outlier is, it shows a threshold for the control limits, but it shows no threshold for the outliers. [00:15:42] Speaker 00: They simply stand out by the fact that they deviate from the bulk of the results by some amount. [00:15:47] Speaker 00: There's no specific threshold set there. [00:15:49] Speaker 00: So what that means is that there is nothing in the claim that renders this non-abstract or unconventional. [00:15:57] Speaker 00: And so when I heard counsel talk about the various applications of outliers, how it can be used to improve the manufacturing process, those are all unclaimed applications. [00:16:07] Speaker 00: And an unclaimed element cannot supply either the non-abstractness or the unconventionality. [00:16:12] Speaker 00: There has to be... [00:16:13] Speaker 02: And is it now, as the case comes to us, agreed between the parties that we can look at [00:16:20] Speaker 02: claim one, and if claim one is ineligible, end of matter? [00:16:26] Speaker 00: Yes, Your Honor. [00:16:27] Speaker 00: That's how the district court treated it. [00:16:28] Speaker 02: And there's no separate argument in the blue brief that even if claim one went down, other claims survived, right?
[00:16:34] Speaker 00: That's correct, Your Honor. [00:16:37] Speaker 00: Yes, it all stands and falls on claim one. [00:16:41] Speaker 00: With respect to the assertion by In-Depth that they were the first ones to do something other than pass-fail kind of testing, [00:16:49] Speaker 00: the specification tells us that that's not true. [00:16:53] Speaker 00: We look at column one, the background of the invention; there's a description, column one, line 30 through line 42, of what many semiconductor companies used to do. [00:17:04] Speaker 00: And one of the things that it talks about is that the data they collected may be analyzed to identify common deficiencies or patterns of defects, [00:17:13] Speaker 00: or identify parts that may exhibit quality and performance issues, and to identify or classify user-defined good parts. [00:17:21] Speaker 00: So these aren't merely parts that don't operate or fail a control limit, but these are user-classified or user-defined good parts, what the semiconductor field used to call known good parts. [00:17:31] Speaker 00: So sometimes they would take their parts as they come off the assembly line and say, well, this one's good for a high-speed part, this one's good for a medium-speed application, this one's good for a low-speed. [00:17:40] Speaker 00: And the specification is describing that in claim one. [00:17:43] Speaker 00: And so we have everything we need here in the specification to confirm the conventionality of each of the elements of the claim. [00:17:51] Speaker 02: Is the portion that you just read understandable as just applying slightly different definitions according to different circumstances of a simple binary pass-fail? [00:18:04] Speaker 02: If I understand Your Honor's question, I think what it's talking about is a good part is a pass, a non-good part is a fail.
[00:18:11] Speaker 00: I don't believe that's correct, Your Honor, because of the user-defined and the reference to performance issues. So parts that fail are parts that are inoperable; you wouldn't be able to sell them. There are parts that meet the control limits, but which still may be, for quality reasons, something that you don't want to sell. So it's an operable part, but it may be of lower quality, and that's the reference to user-defined good parts; that was a known term in the semiconductor field. [00:18:49] Speaker 00: If the panel has no further questions, that will complete our argument. [00:18:53] Speaker 03: Let me just, on your last point, see if I understand. [00:18:56] Speaker 03: You said that the term user-defined good parts was used in the semiconductor field. [00:19:03] Speaker 03: To refer to what, exactly? [00:19:06] Speaker 00: To refer to parts where they pass the control limit. [00:19:11] Speaker 00: It has to be able to stay below a certain temperature while it's being tested. [00:19:16] Speaker 00: The part might stay below that temperature, but it still might be too close to that value. [00:19:21] Speaker 03: So in other words, the equivalent, at least as you see it, of outlier. [00:19:27] Speaker 00: Yes. [00:19:27] Speaker 03: Thank you. [00:19:29] Speaker 01: Thank you, counsel. [00:19:30] Speaker 03: Thank you, Garner. [00:19:31] Speaker 01: Mr. Greenspoon has some rebuttal time. [00:19:48] Speaker 04: Thank you, Your Honors. [00:19:49] Speaker 04: I would agree with the suggestion of yourself, Judge Bryson, and Judge Taranto that that column one excerpt is merely setting up the generalized problem of semiconductor testing, of separating the good from the bad. [00:20:00] Speaker 04: That's the old, unsophisticated way. [00:20:03] Speaker 04: So you can test for good. [00:20:05] Speaker 04: You can test for bad. [00:20:06] Speaker 04: But column one doesn't suggest that the folks in the art were doing anything other than that.
[00:20:11] Speaker 04: There was nothing in between. [00:20:12] Speaker 02: There's no good enough for government work. [00:20:16] Speaker 04: But there is no concept of medium enough for government work. [00:20:23] Speaker 04: So then going back to some of the things said, first off, I'd like to refer Your Honors to appendix page seven. [00:20:29] Speaker 04: It's actually connected to the blue brief. [00:20:31] Speaker 04: That's where the claim construction is. [00:20:34] Speaker 04: And I would certainly dispute the idea that we're rewriting the claim construction or trying to come up with a new one. [00:20:41] Speaker 04: In the middle of appendix page seven, and OK, if you're ready, I'll start, the court says the written description of the patent defines outlier as a test result that strays from a set of test results that did not exceed the control limits specified for the tested component or otherwise fail and that have statistically similar values. [00:21:03] Speaker 04: So one of the concepts or notions baked into this claim construction left out by my brother is the notion at the very end, statistically similar values. [00:21:13] Speaker 04: That's where we get it's intrinsic to this claim construction that there's a statistical baseline that's taken first. [00:21:21] Speaker 04: Everything outside that level of statistical similarity is considered a stray. [00:21:28] Speaker 04: Strays that fail the control limits fail. [00:21:30] Speaker 04: Strays that pass the control limits are considered outliers. [00:21:34] Speaker 04: That's why this is all jumping straight off the page of just the words that I read to Your Honors. [00:21:41] Speaker 04: And then the only final point I'd like to make for Your Honors, unless there are any further questions: [00:21:46] Speaker 04: unconventionality absolutely, under this court's case law, is significant for Alice step one.
[00:21:53] Speaker 04: And the particular cases that are best to illustrate this are Finjan, McRO, and even... [00:21:53] Speaker 02: Can I just ask you, on the statistically, what is the term, the statistically similar values... [00:22:15] Speaker 02: I see. [00:22:15] Speaker 02: That's in the first part of the column six citation. [00:22:19] Speaker 02: Sorry. [00:22:19] Speaker 02: OK. [00:22:20] Speaker 02: Not in the second part of it. [00:22:23] Speaker 04: That perhaps is true. [00:22:24] Speaker 04: There are two paragraphs in column six that are to be read together. [00:22:27] Speaker 02: The first part of that is statistically similar. [00:22:28] Speaker 02: Sorry. [00:22:28] Speaker 04: That's right. [00:22:29] Speaker 04: So back to unconventionality within Alice step one, it's absolutely a factor. [00:22:34] Speaker 04: And I just mentioned Finjan, McRO, and even Electric Power Group. [00:22:38] Speaker 04: Even though that was a case that eventually ended up with a holding of no eligibility, the reason why there was no eligibility under Electric Power Group was because there was no, quote unquote, new technology for performing the analysis function. [00:22:57] Speaker 04: That's at the very end of the section describing Alice step one in that opinion. [00:23:01] Speaker 04: So the necessary implication is, if there had been a new technology for performing the analysis function described in that decision, then there would have been a stronger consideration for passing Alice step one. [00:23:15] Speaker 03: Let me ask you, what exactly do you think or do you take from the words statistically similar values? [00:23:24] Speaker 03: What does that suggest to you? [00:23:26] Speaker 04: Personally, I thought the [00:23:27] Speaker 04: the grammar of the district court was a little bit... it hangs together, but it took a little bit of time to read through it and understand it. [00:23:35] Speaker 04: And I had to understand it in the context of the cited column six.
[00:23:39] Speaker 04: So if you read column six, it's very clear on what's going on at the detail level, which is there is a statistical threshold. [00:23:48] Speaker 04: So at the deepest level of detail, that could be one standard deviation [00:23:53] Speaker 04: among, let's say, the 2,000 samples in the trial; it could be two standard deviations. [00:24:00] Speaker 03: In other words, what you're saying, I take it, is that the statistically similar values term means departing from the ideal [00:24:12] Speaker 03: group of results by a certain amount. [00:24:15] Speaker 04: Oh, it's not ideal. [00:24:16] Speaker 04: It's empirical. [00:24:17] Speaker 04: It's on a case-by-case basis. [00:24:18] Speaker 03: Well, when I say ideal, I mean you have three sets of test results. [00:24:23] Speaker 03: You have the succeeds-fabulously results. [00:24:27] Speaker 03: You have the fails-miserably. [00:24:29] Speaker 03: And then you have the outliers in between. [00:24:33] Speaker 04: If I may adjust what Your Honor said just in one way, you don't start knowing that it's going to be three categories. [00:24:39] Speaker 03: I understand that. [00:24:40] Speaker 03: But when you finish your testing, [00:24:42] Speaker 03: you end up with, as I understand it, you end up with three categories. [00:24:46] Speaker 04: That's right. [00:24:47] Speaker 03: One is ready to go, mil-spec, perfect, whatever you want, or within the tightest of your controls. [00:24:55] Speaker 03: You have another group that have failed altogether. [00:24:59] Speaker 04: Correct. [00:24:59] Speaker 03: And you have a third group, which are outliers. [00:25:02] Speaker 03: And by virtue of your defining them as outliers, that is to say they are not in either one of those groups, [00:25:10] Speaker 03: then that becomes a statistically similar value for each of those outliers. [00:25:17] Speaker 03: Isn't that correct? [00:25:18] Speaker 04: Not at all.
[00:25:19] Speaker 03: Well, nothing you just said told me why that isn't an accurate description of the outliers and the term statistically significant. [00:25:30] Speaker 03: So help me with that. [00:25:31] Speaker 04: OK. [00:25:32] Speaker 04: If you have 2,000 samples that you apply statistics to, [00:25:37] Speaker 04: you do some math and you get a standard deviation. [00:25:40] Speaker 04: I'll use that as an example. [00:25:41] Speaker 04: You have to use all 2,000 samples to know what the standard deviation is for the whole group. [00:25:47] Speaker 04: What this is saying, if that's going to be your threshold, this is saying if it passes but it's above one sigma, we're going to call that an outlier. [00:25:55] Speaker 04: And we're going to call it a stray also, but that's another term. [00:25:59] Speaker 04: So it's a stray slash outlier. [00:26:02] Speaker 03: OK. [00:26:02] Speaker 04: So that's why, with, let's say, all the white triangles together in Figure 9, you wouldn't bunch them together and say, oh, those are statistically similar to one another. [00:26:11] Speaker 04: The statistical similarity exercise is first done on the 2,000 samples. [00:26:16] Speaker 03: I understand that. [00:26:16] Speaker 03: But when you finish, you have three sets of results: one, [00:26:23] Speaker 03: within the narrow confines; two, outside of the broad confines; and three, whether it's one sigma or more, is what's left. [00:26:34] Speaker 03: And those are, by definition, I take it by your description that you just gave me, statistically significant values. [00:26:46] Speaker 04: Right, if I may, Your Honor. Of course, those are significant in the context of the machinery and what you can do with them, all right. [00:26:59] Speaker 01: Yes. Thank you.