This article shows how to handle Redshift table columns whose values exceed the 65k size limit. Redshift cannot store more than 65,535 bytes in a single column (the VARCHAR maximum), and in our case the offer store database produces column values larger than that. The following approach works around the limit.
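For context, the limit itself: the widest character column Redshift allows is VARCHAR(65535). The table below is only an illustrative sketch (demo_offers_raw is not part of the offer store schema).
-- Hypothetical staging table: VARCHAR(65535) is the widest character column Redshift allows;
-- VARCHAR(MAX) is an alias for the same size, and anything wider is rejected at CREATE TABLE time.
CREATE TABLE demo_offers_raw (
    id      BIGINT,
    payload VARCHAR(65535)
);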
Step 1: Find out which column's array values exceed the 65k limit, then write the extraction query around that column.
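One way to run this check is a byte-length scan. The sketch below is illustrative only: offers_staging and the choice of "Active.Offers" as the oversized column are assumptions about where the raw flattened values land, not part of the actual offer store schema.
-- Count the flattened array values that would not fit in a single Redshift column
-- (OCTET_LENGTH measures bytes, which is what the 65,535 limit applies to).
SELECT COUNT(*) AS oversized_rows,
       MAX(OCTET_LENGTH("Active.Offers")) AS max_bytes
FROM offers_staging
WHERE OCTET_LENGTH("Active.Offers") > 65535;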
Step 2: Query the oversized array column separately through its child table. Sample below:
SELECT o."_id", o."EntityId", o."EntityType",
       o."RandomNumber", o."ModifiedOn" AS modified,
       o."Active.Offers", o."Active.Status", o."Active.ModifiedOn",
       o."Active.SubEntityId", o."Active.Source",
       o."Active.RequestedAmount", o."Active.MaxAmount",
       o."Active.InitialApprovalAmount",
       oo.P_id, oo."SubEntityType", oo."SubEntityId", oo."Status",
       oo."RequestedAmount", oo."MaxAmount", oo."InitialApprovalAmount",
       oo."ModifiedOn" AS "ModifiedOn", oo."GeneratedOn", oo."Source",
       oo."Attributes", oo."Offers", oo."_Index"
FROM "offers.Offers" oo
INNER JOIN offers o
        ON o._id = oo.P_id
WHERE "ModifiedOn.DateTime" >= '${offers_offerstore_max_date}'
In the query above, the oversized column values are not read from offers directly; the Offers array is split out into the "offers.Offers" child table and joined back on P_id, so the Offers values are taken from there.
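For reference, one possible shape for the split-out child table, inferred only from the columns the query selects; the data types here are assumptions, not the actual offer store DDL. Each element of the parent's Offers array becomes its own row keyed by P_id and "_Index", so no single column value has to exceed 65,535 bytes.
-- Assumed child table layout: one row per array element of the parent offer.
CREATE TABLE "offers.Offers" (
    P_id                    VARCHAR(64),     -- parent offers._id
    "SubEntityType"         VARCHAR(256),
    "SubEntityId"           VARCHAR(256),
    "Status"                VARCHAR(64),
    "RequestedAmount"       DECIMAL(18,2),
    "MaxAmount"             DECIMAL(18,2),
    "InitialApprovalAmount" DECIMAL(18,2),
    "ModifiedOn"            TIMESTAMP,
    "GeneratedOn"           TIMESTAMP,
    "Source"                VARCHAR(256),
    "Attributes"            VARCHAR(65535),
    "Offers"                VARCHAR(65535),
    "_Index"                INTEGER          -- position of the element in the parent array
);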