[00:00:00] Speaker 06: All right. [00:00:00] Speaker 06: The next argued case is number 20-2209, Custom Play LLC, against Amazon.com, Incorporated. [00:00:09] Speaker 06: Mr. Carey, your turn. [00:00:12] Speaker 04: Thank you very much, Your Honor, and may it please the Court. [00:00:14] Speaker 04: John Carey for the Appellant Custom Play LLC. [00:00:18] Speaker 04: We have in this appeal, Your Honor, the same constitutional and statutory violations that were argued in conjunction with the companion appeals. [00:00:28] Speaker 04: Am I safe to assume that the court doesn't want to hear a particularized argument about those issues in this argument? [00:00:35] Speaker 01: We can refer back to the companion argument. [00:00:42] Speaker 01: Thank you. [00:00:43] Speaker 04: Thank you, Your Honor. [00:00:46] Speaker 04: This case also presents another common issue. [00:00:50] Speaker 04: This case shares the video clip issue that was present in the case involving the '950 patent. [00:01:01] Speaker 04: This case involves the '282 patent. [00:01:04] Speaker 04: And here, the board's decision on claims 7, 8, and 18. [00:01:12] Speaker 04: There are several other claims [00:01:20] Speaker 04: that are at issue in this appeal, but the board's decision on those three claims is predicated on the board's finding that Abecassis's segment constitutes the '282 patent's video clip. [00:01:40] Speaker 04: And so the board looked to Abecassis to supply that video clip recitation [00:01:48] Speaker 04: and claim limitation in each of claims 7, 8, and 18. [00:01:52] Speaker 04: And so if the board was wrong about that, as it was in the '950 patent case, then its decisions on claims 7, 8, and 18 in the '282 patent fail for the same reasons. 
[00:02:11] Speaker 04: There was no other prior art cited in support of the board's findings on that [00:02:17] Speaker 04: claim limitation for resuming the playing at the beginning of a video clip other than Abecassis's disclosure of a segment, which is not a video clip based upon the definition of video clip given in the '282 patent and the definition of segment given in the Abecassis reference patent. [00:02:36] Speaker 04: In the interest of the court's time, I would like to incorporate by reference my arguments on [00:02:46] Speaker 04: the reference patent and the claim term. The claim term video clip is defined identically in the specifications of both the '950 patent and the '282 patent. [00:02:58] Speaker 04: And so the argument is applicable here as it was over there in the other case. [00:03:03] Speaker 04: I just don't want to [00:03:06] Speaker 04: waste the court's time by repeating those arguments, although I am cognizant of the fact that this argument is being separately recorded and is a record of this appeal. [00:03:14] Speaker 06: It's the same specification. [00:03:19] Speaker 06: And so what would be helpful to us, I think, would be to point out if there are any distinctions on which you're relying in the claims or in the prior art. [00:03:30] Speaker 04: Yes. [00:03:30] Speaker 04: Well, for claims 7, 8, and 18, Your Honor, the board [00:03:36] Speaker 04: relied exclusively on Abecassis's reference to segments to satisfy the '282 patent's limitation for video clip. [00:03:48] Speaker 04: And for the reasons that I argued in the companion case, we believe that was unsupportable by the record evidence, which principally consists of the reference patent itself. [00:03:59] Speaker 04: The text of the reference patent is the most important evidence here. [00:04:04] Speaker 04: It defines the term being used particularly, and we submit that definition must be followed. 
[00:04:12] Speaker 04: Likewise, the patent under review, the '282 patent, specifically defines video clip in its specification. [00:04:21] Speaker 04: The inventor was a lexicographer and attached multiple qualifications to what needs to be in a video clip for it to count as a video clip, and those things are not [00:04:34] Speaker 04: required by Abecassis's segment in the reference patent. So it was error for the board to equate the two. And these arguments are substantially the same as in the companion case involving the '950, so I won't dwell on that anymore. There are some other arguments regarding other claim limitations that we've raised on this appeal, relating to different grounds, and I would like to speak briefly about that, unless any member of the panel has [00:05:04] Speaker 04: further questions on this video clip issue? [00:05:08] Speaker 06: Now let's go to the other arguments. [00:05:10] Speaker 04: Thank you, Your Honor. [00:05:12] Speaker 04: Okay, so there's a common limitation in claims 4, 7 through 9, 14, 16, and 19 of the '282 patent. [00:05:27] Speaker 04: And that limitation is retrieving a video frame identifier responsive to a request location. [00:05:34] Speaker 04: And the board relied on Armstrong for that, at least in part. [00:05:50] Speaker 01: It cited the Armstrong reference patent. [00:05:55] Speaker 04: It cited the disclosure there, appendix 1915, about Armstrong. [00:06:03] Speaker 04: plays back a frame counter, and if... [00:06:03] Speaker 03: Just to be clear, on this limitation, the alternative relied on Armstrong, Rangan, and Wreck-It, right? [00:06:14] Speaker 04: That is correct, Your Honor. [00:06:16] Speaker 03: So for... Do you have to prevail on each one of those? [00:06:20] Speaker 04: For these claims, yes. [00:06:22] Speaker 04: That's why, as I said, I think, you know, for claims 7, 8, and 18, we don't. [00:06:27] Speaker 04: We only need the video clip issue. 
[00:06:31] Speaker 04: that's predicated on the advocacy reference for Claim 7, 8, and 18. [00:06:35] Speaker 04: Now I'm talking about other claims which there are a variety of grounds of rejection and there isn't just one reference that was cited for this particular limitation that I'm talking about. [00:06:46] Speaker 04: So for this particular limitation about retrieving a frame identifier responsive to a request location, there were a couple of references cited. [00:06:55] Speaker 04: But we submit each of those don't disclose this limitation. [00:06:59] Speaker 01: As it relates to Armstrong, [00:07:01] Speaker 01: Pardon me, Your Honor. [00:07:06] Speaker 04: In Armstrong, the teaching is that the system there counts the frames, and when the frame number matches a predetermined range of frames, then a menu structure is presented. [00:07:27] Speaker 04: And I guess I should back up for a second and talk about [00:07:31] Speaker 04: What this patent is the heart of this patent. [00:07:34] Speaker 04: This patent, the 282 patent, is about presenting item information to a viewer while you're watching the video. [00:07:42] Speaker 04: So for example, the idea here is the specification discloses if a song is played during a movie, there could be an indication, like a musical note could appear, and the user could [00:07:59] Speaker 04: click on the musical note, and it would then identify the song. [00:08:03] Speaker 04: And you could even provide a link where the user could download the song through their media device. [00:08:09] Speaker 04: Or there could be an item of clothing worn by one of the performers in the video. [00:08:19] Speaker 04: And that item, the piece of clothing, I think the specification talks about a dress or a hat, or it could be anything. [00:08:30] Speaker 04: You know, those items are identified, and then the user can interact with those items, and potentially, if the capability is provided, link to and purchase those items. 
[00:08:40] Speaker 04: So this is real-time information about the video, and the system is trying to allow the user to interact with these things as they're watching the video. [00:08:57] Speaker 04: So the patent has a limitation for retrieving a frame identifier responsive to a request location while you're playing the video, okay? [00:09:08] Speaker 04: What Armstrong does, and this is taught in the Armstrong reference at appendix pages 1914 and 1915, is that if the user hits the pause button, there can be [00:09:26] Speaker 04: preselected images that are displayed in a menu structure. [00:09:30] Speaker 04: And the way that that happens is disclosed in Armstrong at appendix 1915. [00:09:35] Speaker 04: And so what Armstrong does is it runs a frame counter. [00:09:40] Speaker 04: And so as frames are being played in the video stream, it compares the frame count to a range of predefined frames. [00:09:50] Speaker 04: And when there's a match, there's a menu associated with the scene number [00:09:56] Speaker 04: for where that occurs. [00:10:00] Speaker 04: And then the menu gives the viewer all the items that are in the scene. [00:10:06] Speaker 04: So that's very different conceptually from the '282 patent. [00:10:15] Speaker 04: The '282 patent does this, and Amazon characterized it this way, by the way, on a frame-by-frame basis. [00:10:23] Speaker 04: Not a scene basis, but a much more [00:10:26] Speaker 04: specific basis. [00:10:29] Speaker 04: By doing it the '282 patent way, where you identify a frame identifier responsive to a request location, the system ends up presenting the item information associated with that frame. [00:10:44] Speaker 04: Other limitations go on to be able to associate it with items at that frame or items in a predefined, say, 10-second range. 
[00:10:53] Speaker 04: But what Armstrong does [00:10:54] Speaker 04: is it counts where the user is in the video, and if he or she pushes pause, it doesn't give you a frame identifier in response to the play location, which is the frame where you pause. [00:11:10] Speaker 04: It gives you a menu based upon the whole scene, and then it shows you all the items in that scene. The reason that the '282 patent system is a superior, [00:11:24] Speaker 04: improved architecture is that if you show the user all the items in the whole scene, the user is probably seeing a lot of things that really didn't motivate them to click the button in the first place. [00:11:37] Speaker 04: They clicked the button most likely for what they were looking at, not for every single item in the whole scene. [00:11:43] Speaker 04: And so the '282 methodology is designed to be frame accurate, and the claims are drafted to accomplish that. [00:11:51] Speaker 04: And Armstrong [00:11:54] Speaker 04: has a different approach and carries out its approach in a different way with a different architecture. [00:12:00] Speaker 04: But for the purposes of this limitation, and this is borne out in this limitation, so the limitation is retrieving a video frame identifier responsive to a request location. [00:12:10] Speaker 04: Well, in Armstrong, the menu that's presented is not responsive to the request location. [00:12:21] Speaker 04: It's related to the entire scene, not the particular location where the user pressed the button. [00:12:28] Speaker 04: And so for that reason, Armstrong doesn't disclose this limitation. [00:12:37] Speaker 04: So in the patent, the way the claim is phrased, there's a claimed relationship between the term video frame identifier and the location. [00:12:50] Speaker 04: The location has to be responsive to the frame. [00:12:53] Speaker 04: The frame identifier has to be responsive to the location. 
[00:12:55] Speaker 04: That doesn't happen in Armstrong because the identifiers aren't responsive to the location. [00:13:01] Speaker 04: And it's an important distinction, and that's why I was sort of explaining the overall differences between how the two systems operate, because one of the next steps that's claimed is displaying information associated with the frame identifier. [00:13:19] Speaker 04: Right? [00:13:20] Speaker 04: So, in the patented system, you retrieve a frame identifier responsive to a particular request location, and then you display the item information associated with that frame identifier. [00:13:34] Speaker 04: So, as I was saying, in the patented system, you get the item that you wanted to see. [00:13:39] Speaker 04: You get the item shown to you that corresponds to the request location that you hit the button on. [00:13:46] Speaker 04: In Armstrong, you don't get that item. [00:13:49] Speaker 04: You may get that item, but you'll get, you know, potentially a bunch of other items as well, which you didn't ask for. [00:13:57] Speaker 04: And so... [00:13:58] Speaker 02: So what? The board said it discloses it even though it provides more than the claim requires. [00:14:06] Speaker 04: Well, one is an improvement on the other, Your Honor. [00:14:09] Speaker 04: And so the patented system provides for control at a much more granular level than the prior art system. [00:14:17] Speaker 04: And so that's the so what. [00:14:19] Speaker 04: It's a... [00:14:19] Speaker 02: Where does the claim say that you can't retrieve other information? [00:14:23] Speaker 04: Well, it says that... Where it says it is in this limitation. [00:14:29] Speaker 04: It says it by requiring that the frame identifier must be responsive to the request location. [00:14:36] Speaker 04: And in Armstrong, the menu isn't responsive to the request location. [00:14:42] Speaker 04: That's how it's different. 
[00:14:50] Speaker 04: In Armstrong, it's giving you items for the scene in which the request location was. [00:14:57] Speaker 04: But that scene could be five minutes long. [00:14:59] Speaker 04: And so it's not responsive to the particular request location where you pressed the button. [00:15:05] Speaker 04: And typically, the patent talks about how you express a request location. [00:15:11] Speaker 04: And the prime example of how to express a request location is using the hours, minutes, seconds, frames format [00:15:19] Speaker 04: that's discussed in the background of the invention. [00:15:21] Speaker 04: So it's identifying the request location by a frame number, not a scene number. [00:15:29] Speaker 04: A request location is a very particular point within the video where the request was made. [00:15:38] Speaker 04: So that's the gist of that argument. [00:15:46] Speaker 04: There's a different limitation at issue. [00:16:01] Speaker 04: It's contemporaneously retrieving a second frame identifier that's different from the first identifier, and that's responsive to a location that's prior to the request location. [00:16:12] Speaker 04: And that's a lot of words. [00:16:13] Speaker 04: And at first blush, it sounds kind of confusing, perhaps. [00:16:17] Speaker 04: But when you read the specification, you understand what they're getting at here. [00:16:21] Speaker 04: This is getting at the situation I alluded to just a few minutes ago, where the inventor is saying, OK, I'm watching the video. [00:16:29] Speaker 04: I get to a point. [00:16:30] Speaker 04: I click the button. [00:16:31] Speaker 04: I want to get that. [00:16:32] Speaker 04: I want to see that dress. [00:16:36] Speaker 04: And so the first limitation accomplishes that for you. [00:16:40] Speaker 04: It sets up the accomplishment of that. 
[00:16:42] Speaker 04: The second limitation recognizes that people might not be able to press the buttons that fast. [00:16:49] Speaker 04: Okay. [00:16:50] Speaker 04: And so it may take them a few seconds to press the button after they realize they want more information about an item they saw in the video. [00:17:00] Speaker 04: So in this aspect of the invention, there's this additional limitation for a second frame identifier that's responsive to a location prior to the request location. [00:17:12] Speaker 04: And the specification describes that the way that this will typically be implemented is that the system will pre-define a period of time before when the button is pushed that will be included there. [00:17:31] Speaker 04: The way this works is, in practice, as described by the invention, the viewer presses a button. [00:17:38] Speaker 04: That is the request location. [00:17:40] Speaker 04: You get a frame identifier correlated to that point in the video, and you get an item information in that frame. [00:17:48] Speaker 04: Also, in claims where this other limitation is recited, you'll also get a second frame identifier [00:17:55] Speaker 04: that corresponds to a location in the video before the location where you push the button. [00:18:01] Speaker 04: So let's say that the producer of the system predefined it to be 10 seconds. [00:18:07] Speaker 04: So you'll get a frame identifier that gives you the item information for items that show up in the frame where you push the button and in the 10 second and in all the frames consisting of the 10 seconds before you push the button. [00:18:21] Speaker 04: The idea here is to enable the viewer to get the item information if they didn't press the button exactly at the right time. [00:18:31] Speaker 04: Now, this limitation, this idea didn't, you know, wasn't part of any of these references. [00:18:39] Speaker 04: None of these other inventors were seeking to accomplish this goal. 
[00:18:42] Speaker 04: And so they don't really have anything that comes close to meeting the specific requirements of this limitation. [00:18:53] Speaker 01: The board relied on Bergen. [00:18:57] Speaker 04: Bergen retrieves frame identifiers without regard to where they are in the video. [00:19:06] Speaker 04: So there's an example given of the baseball player: you're watching a video of a baseball game and you click on an indication that is attached to the player. [00:19:21] Speaker 04: And then the system presents a storyboard to you of [00:19:25] Speaker 04: everywhere in the game the player appears. [00:19:27] Speaker 04: So if you made the request while you were watching the fifth inning of the game, it'll show you if there was a depiction of the player in the third inning or in the eighth inning. [00:19:40] Speaker 04: It'll all be shown to you. [00:19:44] Speaker 04: The board relied upon Bergen's teaching of that to say that that [00:19:51] Speaker 04: disclosed retrieving a second video frame identifier responsive to a prior location. [00:19:58] Speaker 04: And the reason that that finding is not supported by substantial evidence and is wrong is because Bergen doesn't calculate the frame identifiers associated with the baseball player in that example based upon their being prior to [00:20:18] Speaker 04: the request location. [00:20:19] Speaker 04: It just pulls up all the instances of the baseball player. [00:20:23] Speaker 04: And so there's no limitation whereby Bergen retrieves the second frame identifier responsive to a location before you made the request. [00:20:35] Speaker 04: That's the difference there. [00:20:36] Speaker 04: Of course, now in practice, the reason why Abecassis's system is an improvement on Bergen is, you know, [00:20:46] Speaker 04: if you're watching a video, let's use the Bergen example of a baseball game. 
[00:20:50] Speaker 04: If you're watching a baseball game and you stop it in the fifth inning because you want to read whatever information is associated with that player, the Bergen system will show you all the scenes where that player is. [00:21:08] Speaker 04: So if there's a scene of that player celebrating at the end of the game because they won the game, [00:21:14] Speaker 04: and there's a scene in that video of the player jumping up and down celebrating the victory, [00:21:19] Speaker 04: that'll be presented to you while you're watching the video. [00:21:22] Speaker 04: Abecassis limits what it plays back to the viewer to things that happened before the point where you requested information, so as to not spoil the ending, whether it be a movie, a game, or whatever kind of video it is. [00:21:37] Speaker 04: So, you know, the Abecassis system is based upon [00:21:42] Speaker 04: obtaining information about items in the video at a request location and also, in this limitation, items that fit a predefined prior period of play of the video. [00:21:56] Speaker 04: And the prior art does not teach doing it on a basis whereby it's prior to the request. [00:22:07] Speaker 04: The Bergen prior art system [00:22:11] Speaker 04: doesn't consider whether the other video frame identifier occurs before or after the request location. [00:22:23] Speaker 04: It's indifferent to the relationship between the second frame identifier and the first frame identifier. [00:22:30] Speaker 04: The claim language, though, requires the second frame identifier to be responsive to a location that's prior to the request, and that's the difference. [00:22:40] Speaker 06: Okay. [00:22:41] Speaker 06: Thank you, Mr. Carey. [00:22:42] Speaker 06: We'll save your rebuttal time. [00:22:44] Speaker 06: These explanations are helpful. [00:22:46] Speaker 06: Now let's hear from the other side. [00:22:47] Speaker 06: Mr. Heideman. [00:22:50] Speaker 05: Thank you, Your Honor. 
[00:22:51] Speaker 05: May it please the court. [00:22:52] Speaker 05: I'd like to begin by responding to a couple of the points made about Abecassis in this IPR. [00:23:02] Speaker 05: And Custom Play's counsel is right. [00:23:04] Speaker 05: That prior art reference, the Abecassis prior art reference, was a prior application by the same inventor, and it describes the same resuming step, just using slightly different terminology. [00:23:17] Speaker 05: The question here is not whether the two, the [00:23:20] Speaker 05: '282 patent and the prior art Abecassis patent, define video clip and segment in the same way. [00:23:27] Speaker 05: It's whether the prior art Abecassis patent discloses a video clip as defined here, and it clearly does. [00:23:35] Speaker 05: In the '282 proceeding that we're discussing now, the board relied on Amazon's expert testimony and found that testimony to be credible and persuasive. [00:23:45] Speaker 05: That's in the appendix at page 42. [00:23:47] Speaker 05: And again, Custom Play's expert offered no opinion to the contrary. [00:23:52] Speaker 05: In fact, in his deposition, he admitted that this limitation was old and that people of ordinary skill in the art knew the benefits of resuming at the beginning of a video clip. [00:24:03] Speaker 05: And that's in the... [00:24:03] Speaker 06: This just seems to be repeating what we've already heard. [00:24:07] Speaker 05: It's a little bit repetitive, but there's slightly different evidence in each of the two proceedings. [00:24:11] Speaker 05: So I just wanted to give the appendix cites for the evidence in the '282 proceeding. [00:24:18] Speaker 05: And so Custom Play's expert's admission on that is in the appendix at 2492. [00:24:24] Speaker 05: I should also note that in this proceeding, the board relied on the background knowledge of a person of ordinary skill in the art and cited other evidence for that as well. 
[00:24:37] Speaker 05: Turning now to the arguments with respect to the Armstrong reference, I think there's some, [00:24:45] Speaker 05: misunderstanding about how Armstrong was applied to the claim limitation that's challenged here. [00:24:52] Speaker 05: The claim limitation simply recites retrieving a first video frame identifier that is responsive to the request location. [00:25:04] Speaker 05: And the board found that Armstrong discloses this limitation because it discloses retrieving a frame number that corresponds to the point of suspension. [00:25:15] Speaker 05: This is in the appendix at 26 to 27. [00:25:19] Speaker 05: And in general, the board relied on both parties' experts. [00:25:22] Speaker 05: And even Custom Place expert agrees that Armstrong discloses a video frame number, which is a video frame identifier, and that that video frame number is responsive to the request location that's in the appendix at 2517 at lines 4 to 15 of his deposition testimony. [00:25:43] Speaker 05: Custom Place Council argued that Armstrong doesn't disclose this because it identifies the menu before determining the point of suspension. [00:25:53] Speaker 05: That's incorrect, as the board explained in the appendix at 31 to 32. [00:26:01] Speaker 05: The board relied on Armstrong's disclosures and Amazon's expert and explained that Amazon's expert provided detailed testimony that was more credible and deserving of greater weight than Custom Place [00:26:12] Speaker 05: expert testimony, which was conclusory and unsupported. [00:26:16] Speaker 05: That's in the appendix 32. [00:26:20] Speaker 05: And again, custom place expert agreed with essentially the board's analysis. [00:26:25] Speaker 05: Their expert admitted that Armstrong first identifies the frame count and the frame count is then used to determine which menu structure to display. [00:26:35] Speaker 05: That's in the appendix 2516 to 2517. 
[00:26:41] Speaker 05: And so Armstrong clearly discloses retrieving a first video frame identifier, a frame number that's responsive to the request location, which is the point of suspension. [00:26:51] Speaker 05: Custom Play's counsel seems to be arguing about the menu that's displayed; that's not part of this limitation. [00:26:59] Speaker 05: But in any event, the menu that's displayed is frame specific and responsive to the request location as well. [00:27:08] Speaker 05: Armstrong does not limit the menu that's displayed to the scene, as Custom Play's counsel suggests. [00:27:16] Speaker 05: There was also a reference to the fact that the menus in Armstrong are preselected. [00:27:23] Speaker 05: As the board noted in the appendix at page 32, at oral argument Custom Play's counsel admitted that nothing in the claim precludes information from being preselected or pre-mapped to the video frames. [00:27:36] Speaker 05: In fact, that's how all of these systems work. [00:27:39] Speaker 05: There's some mapping done ahead of time by the author of the video, who looks at the frames of the video on the one hand and maps and links them to the supplemental content that will be displayed [00:27:49] Speaker 05: for each video frame. [00:27:51] Speaker 05: Of course, that process is done ahead of time. [00:27:53] Speaker 05: And in that sense, the information that's being displayed, whether it's a menu or a background frame, is preselected. [00:28:00] Speaker 05: But the claim refers to what happens when the viewer is actually viewing the video and makes a request for information. [00:28:07] Speaker 05: And at that time, it's clear that Armstrong discloses exactly what the claim requires, which is determining or identifying the request location, which is the point of suspension, [00:28:17] Speaker 05: and then retrieving a video frame identifier responsive to that request location, which is the frame number. 
[00:28:24] Speaker 05: Then it goes on to the next step and retrieves the menu that's associated with that frame number and the background frame that may or may not be responsive to that request location. [00:28:34] Speaker 05: In this instance, the background frame is actually responsive to a different frame number, which is exactly what the claim requires. [00:28:45] Speaker 05: And the board found that Armstrong's menu is frame-specific. [00:28:49] Speaker 05: That's referred to in the appendix at 31 and described in Armstrong in the appendix at page 1903 in Figure 2A and at 1912 in Paragraph 31. [00:29:01] Speaker 05: So the board's finding here that Armstrong discloses this limitation, the limitation that is being challenged, [00:29:08] Speaker 05: was supported by substantial evidence, including Amazon's expert testimony and the testimony of Custom Play's expert. [00:29:25] Speaker 05: With respect to the Bergen ground, as Custom Play's counsel concedes, Bergen discloses retrieving, and I'll discuss it in the context of the baseball example as well. [00:29:36] Speaker 05: If you're watching a recording of a World Series game from 10 years ago, Bergen discloses that you can click on an object such as the baseball player, and it will search for and return other scenes that have that baseball player in them, present the results of that search in storyboard form, and then the user can click on those scenes and view the different clips, whether he's at bat in the third inning or the eighth inning. [00:30:02] Speaker 05: And there's no question that that disclosure [00:30:05] Speaker 05: discloses exactly what the claim limitation requires, which is retrieving a second video frame identifier, a frame number associated with a prior scene. [00:30:14] Speaker 05: It's different from the first video frame identifier. [00:30:17] Speaker 05: Here, the first video frame identifier is a frame for the fifth inning. 
[00:30:23] Speaker 05: The second video frame identifier has to be responsive to a location that is prior to the request location. [00:30:28] Speaker 05: So if you make the request during the fifth inning, the system is going to retrieve a video frame identifier for the fifth inning and a video frame identifier for the third inning, and it will contemporaneously display information related to both of those play locations. [00:30:42] Speaker 05: That's all the claim limitation requires, and that's exactly what Bergen discloses. [00:30:48] Speaker 05: Custom Play takes issue with the fact that Bergen also discloses [00:30:53] Speaker 05: retrieving video frame identifiers from after the request location, but the claim doesn't preclude that in any way. [00:31:00] Speaker 05: There's nothing in the claim language that precludes retrieving a third, fourth, fifth, or tenth video frame identifier, and there's nothing that precludes those additional video frame identifiers from being before or after the play location or the request location. [00:31:16] Speaker 05: The board expressly rejected this argument in the appendix at page 16. [00:31:21] Speaker 05: And the only support Custom Play had offered was the one-paragraph statement from its expert that the board found to be conclusory. [00:31:30] Speaker 05: Essentially, what they're trying to do is import a negative limitation into this claim. [00:31:34] Speaker 05: It's nowhere in the claim language, and that's why, in their brief, they're resorting to a reference to what they believe is the clear intent of the patent owner to limit the claim. [00:31:47] Speaker 05: That's pure attorney argument. [00:31:48] Speaker 05: There's nothing in the claim language, the specification, or the prosecution history that supports such a limitation. 
[00:31:59] Speaker 01: In their reply brief, they argued that such a limitation should be imported because otherwise [00:32:05] Speaker 05: You would spoil a video by returning results from after that location. [00:32:10] Speaker 05: And, of course, that assumes the video is something that can be spoiled. [00:32:14] Speaker 05: Even in Mr. Carey's example that he just gave of a video with music, you can imagine that someone would, when they make a request, would want to see the music for the video, whether it's before or after. [00:32:26] Speaker 05: It's not going to spoil the video simply because it returns information that may come up later. [00:32:31] Speaker 05: And the patent specifically says that the videos that are claimed here are not just movies, but they can be news, sports, commercials, or any of a number of different types of programming. [00:32:42] Speaker 05: And that's in the appendix of 125, column five, lines 58 to 63. [00:32:48] Speaker 05: So their arguments seem to be based on the theory that patent is limited to this movie embodiment when it's clearly not so limited. [00:33:00] Speaker 01: Unless the court has any questions, I will see the remainder of my time. [00:33:05] Speaker 06: Any questions from the panel? [00:33:09] Speaker 06: Okay. [00:33:09] Speaker 06: Thank you, Mr. Eidemann. [00:33:11] Speaker 06: Thank you. [00:33:12] Speaker 06: Well, let me also, again, thank the Director's Council for standing by. [00:33:17] Speaker 06: Let me ask the panel if you have any final questions on the constitutional issue that you would want to ask Mr. Meyer [00:33:30] Speaker 06: representing the director. [00:33:33] Speaker 02: No, thank you. [00:33:34] Speaker 02: No, thank you. [00:33:35] Speaker 06: Okay. [00:33:36] Speaker 06: Hearing none, we'll turn to Mr. Carey. [00:33:38] Speaker 06: You have the last word. [00:33:41] Speaker 04: Thank you very much, Your Honor, and I'll be very, very short in my concluding remarks. 
[00:33:46] Speaker 04: Let me respond first to Amazon counsel's statements about the board's finding, based upon Amazon's expert, that [00:34:00] Speaker 04: Abecassis' segment substitutes for the 282 patent's video clip. [00:34:08] Speaker 04: The argument was made that the expert opined upon that, and that that should be good enough. [00:34:16] Speaker 04: That's all we need. [00:34:17] Speaker 04: That's essentially their point on that. [00:34:19] Speaker 04: Don't look very closely; we've got an expert here, check the box, let's move on. [00:34:25] Speaker 04: If you look at what the board did with that, [00:34:30] Speaker 04: at page 42 of the appendix, [00:34:33] Speaker 04: they did credit Dr. Bobick's testimony. [00:34:38] Speaker 04: They say they credited his testimony as to how he interprets the segment in Abecassis to correspond to the clip within the meaning of the 282 patent. [00:34:54] Speaker 04: What I want to emphasize is that the board's finding about Dr. Bobick's testimony [00:34:59] Speaker 04: was only about Dr. Bobick's testimony on the Abecassis patent. [00:35:07] Speaker 04: In other words, it wasn't Dr. Bobick opining on, you know, the state of the art in general or what other references might have done or things like that. [00:35:17] Speaker 04: No, it was very limited. [00:35:20] Speaker 04: The finding supporting these claims, the board's findings on seven, et cetera, [00:35:28] Speaker 04: is based upon Dr. Bobick's testimony exclusively about what Abecassis' segment is. [00:35:36] Speaker 04: Not about anything else, not about what Dr. Bobick thought about anything else other than what the Abecassis disclosure taught. [00:35:45] Speaker 04: And where the Abecassis reference itself defines that term segment so broadly, [00:35:53] Speaker 04: and clearly without any of the qualifications given in the 282 patent's definition of video clip, Dr.
Bobick's conclusion that Abecassis' segment is the same thing as the 282 patent's video clip is unsupported. [00:36:09] Speaker 04: It relies on nothing else but his reading of that patent reference. [00:36:13] Speaker 04: It relies on nothing else but Dr. Bobick's reading of that Abecassis patent, and Dr. Bobick ignores [00:36:22] Speaker 04: the explicit definition of segment used in that patent and attaches to it narrowing qualifications, in order to equate it to the video clip, that simply aren't part of the term segment in the Abecassis reference. [00:36:40] Speaker 04: So because the only basis for the finding on this issue is Dr. Bobick's reading of Abecassis, and because Dr. Bobick's reading of Abecassis [00:36:51] Speaker 04: is at war with Abecassis itself, conflicts with the very definition given to segment in Abecassis and, by comparison, the definition in the 282 patent of clip, that holding shouldn't stand. [00:37:07] Speaker 04: Now, interestingly, Amazon did something in the briefs in both this case and the companion case on video clip that they didn't do in connection with any of the other limitations that were argued. [00:37:19] Speaker 04: For this limitation, [00:37:21] Speaker 04: they argue, well, even if the Abecassis reference's segment doesn't constitute a video clip, there are some other references that we can point to that do. [00:37:32] Speaker 04: They don't do that anywhere else, because they recognize that they're on thin ice here with the Abecassis disclosure of segment. [00:37:39] Speaker 04: And I submit that none of those other things that they claim to point to are good arguments. [00:37:46] Speaker 04: And we refute them in our reply brief. [00:37:49] Speaker 04: But the board never even reached them. [00:37:51] Speaker 04: The board never considered whether anything other than Abecassis disclosed the video clip limitation.
[00:37:58] Speaker 04: So if anything, we don't think those other alternative arguments work, for the reasons that we've addressed in our reply brief already. [00:38:07] Speaker 04: But at a minimum, if anybody's going to figure out whether or not those alternatives disclose the video clip, [00:38:17] Speaker 04: it ought to be the board in the first instance, because the board didn't determine that in the first instance. [00:38:22] Speaker 04: It didn't consider whether anything except Abecassis satisfied the video clip limitation. [00:38:29] Speaker 04: And so at a minimum, this court should vacate the findings on that subset of claims and remand for the board to perhaps consider whether these alternative references that Amazon is pointing to might possibly show the video clip. [00:38:44] Speaker 04: But they recognize, [00:38:46] Speaker 04: with their fallback argument, that they're on thin ice with respect to this particular issue. [00:38:56] Speaker 04: On the other limitation that was mentioned, about prior to the request location, the limitation is in the claim language itself, the words "prior to." [00:39:13] Speaker 04: You've got to give meaning to the words "prior to," [00:39:16] Speaker 04: and "prior to" can't mean after; it's got to mean before. [00:39:20] Speaker 04: And so that's the support for our claim interpretation of what that limitation is limited to. [00:39:28] Speaker 04: If you're going to give effect to the actual claim language, then it's got to be only prior to, because that's what's used: prior to. [00:39:34] Speaker 04: "Prior to" does not and cannot mean afterwards. [00:39:40] Speaker 04: So with that, if there are any further questions, [00:39:44] Speaker 04: I'm very happy to entertain them, but I really appreciate the court's time across all these companion cases.
[00:39:49] Speaker 04: I know it's a lot here, a lot of references, three patents, and some of this gets very technical, and I appreciate it. [00:39:58] Speaker 06: Thank you. [00:39:59] Speaker 06: Any more questions for counsel? [00:40:02] Speaker 02: No. [00:40:03] Speaker 02: Okay. [00:40:04] Speaker 06: In that case, our thanks to all counsel. [00:40:07] Speaker 06: The case is taken under submission.