Help with failing DB migration for 10.0.0 -> 25.8.0 upgrade #7494
Replies: 3 comments 1 reply
There were 2 records without a `col` value. For now I did the following:

After that, the migrations complete successfully.
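For illustration, a sketch of the kind of backfill this involves, assuming it is done from Python before re-running the migration. The `ensure_col` helper, the shape of the `position` dict, and the default value of `0` are my assumptions for the sketch, not the exact fix I applied:

```python
import json

DEFAULT_COL = 0  # assumed default; pick whatever column position makes sense


def ensure_col(options_json):
    """Return the widget's options JSON with a 'col' key backfilled
    into the position dict. Input without a position dict, or with a
    'col' already present, is left logically unchanged."""
    options = json.loads(options_json)
    position = options.get("position", {})
    if "col" not in position:
        position["col"] = DEFAULT_COL
        options["position"] = position
    return json.dumps(options)


# A hypothetical widget whose position lacks 'col' (sizeX/sizeY are assumed keys)
broken = '{"position": {"row": 4, "sizeX": 3, "sizeY": 8}}'
print(ensure_col(broken))
```

The resulting JSON string can then be written back to `widgets.options` with a plain `UPDATE ... WHERE id = ...` before retrying the migration.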
Thank you for your detailed report, and for sharing how you solved the issue! I'm not sure why there is no `col` field in some of your widgets... Related PR for the migration: #7396
Perhaps? I don't know a lot about the history of the Redash server I'm upgrading, but I know that it has been around for a long time now. It started out as an AWS EC2 instance, but these days we run it in Kubernetes; however, we've kept the same database in an AWS RDS PostgreSQL instance.

I've found a lot of other differences between our database and what I'd expect to see. There are database indices in the new DB that are not in the existing one, and vice versa (lots of differences here that I won't show).

Then there are these differences, where our existing DB is the stuff in red (with the leading `-`). We don't have the `setweight()` weighting in the `queries_search_vector_update()` trigger function:

```diff
 CREATE FUNCTION public.queries_search_vector_update() RETURNS trigger
     LANGUAGE plpgsql
     AS $$
 BEGIN
-    NEW.search_vector = ((to_tsvector('pg_catalog.simple', regexp_replace(coalesce(CAST(NEW.id AS TEXT), ''), '[-@.]', ' ', 'g')) || to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.name, ''), '[-@.]', ' ', 'g'))) || to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.description, ''), '[-@.]', ' ', 'g'))) || to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.query, ''), '[-@.]', ' ', 'g'));
+    NEW.search_vector = ((setweight(to_tsvector('pg_catalog.simple', regexp_replace(coalesce(CAST(NEW.id AS TEXT), ''), '[-@.]', ' ', 'g')), 'B') || setweight(to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.name, ''), '[-@.]', ' ', 'g')), 'A')) || setweight(to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.description, ''), '[-@.]', ' ', 'g')), 'C')) || setweight(to_tsvector('pg_catalog.simple', regexp_replace(coalesce(NEW.query, ''), '[-@.]', ' ', 'g')), 'D');
     RETURN NEW;
 END
 $$;
```

A completely different type for the `object_id` column in the `changes` table:

```diff
 CREATE TABLE public.changes (
     id integer NOT NULL,
-    object_id character varying(255) NOT NULL,
+    object_id integer NOT NULL,
     object_type character varying(255) NOT NULL,
     object_version integer NOT NULL,
     user_id integer NOT NULL,
     change text NOT NULL,
     created_at timestamp with time zone NOT NULL
 );
```

Nullability and default differences in `dashboards` and `queries`:

```diff
 CREATE TABLE public.dashboards (
     id integer NOT NULL,
     updated_at timestamp with time zone NOT NULL,
     created_at timestamp with time zone NOT NULL,
     org_id integer NOT NULL,
     slug character varying(140) NOT NULL,
     name character varying(100) NOT NULL,
     user_id integer NOT NULL,
     layout text NOT NULL,
     dashboard_filters_enabled boolean NOT NULL,
     is_archived boolean NOT NULL,
     version integer NOT NULL,
-    is_draft boolean,
+    is_draft boolean NOT NULL,
     tags character varying[],
     options json DEFAULT '{}'::json NOT NULL
 );
```

```diff
 CREATE TABLE public.queries (
     id integer NOT NULL,
     updated_at timestamp with time zone NOT NULL,
     created_at timestamp with time zone NOT NULL,
     org_id integer NOT NULL,
     data_source_id integer,
     latest_query_data_id integer,
     name character varying(255) NOT NULL,
     description character varying(4096),
     query text NOT NULL,
     query_hash character varying(32) NOT NULL,
     api_key character varying(40) NOT NULL,
     user_id integer NOT NULL,
     last_modified_by_id integer,
     is_archived boolean NOT NULL,
     options text NOT NULL,
     version integer NOT NULL,
-    is_draft boolean,
-    schedule_failures integer DEFAULT 0 NOT NULL,
+    is_draft boolean NOT NULL,
+    schedule_failures integer NOT NULL,
     search_vector tsvector,
     tags character varying[],
     schedule text
 );
```

A different column type in `query_results`:

```diff
 CREATE TABLE public.query_results (
     id integer NOT NULL,
     org_id integer NOT NULL,
     data_source_id integer NOT NULL,
     query_hash character varying(32) NOT NULL,
     query text NOT NULL,
     data text NOT NULL,
-    runtime real NOT NULL,
+    runtime double precision NOT NULL,
     retrieved_at timestamp with time zone NOT NULL
 );
```

This table doesn't exist at all in the fresh install:

```diff
-CREATE TABLE public.query_results_json_cache (
-    id integer,
-    query_hash text,
-    data jsonb
-);
```

Different column sizes in `users`:

```diff
 CREATE TABLE public.users (
     id integer NOT NULL,
     updated_at timestamp with time zone NOT NULL,
     created_at timestamp with time zone NOT NULL,
     org_id integer NOT NULL,
     name character varying(320) NOT NULL,
-    email character varying(320) NOT NULL,
+    email character varying(255) NOT NULL,
     password_hash character varying(128),
     groups integer[],
     api_key character varying(40) NOT NULL,
-    profile_image_url character varying,
+    profile_image_url character varying(320),
     disabled_at timestamp with time zone,
     details json DEFAULT '{}'::json
 );
```

Then some extra constraints are in the new DB that don't exist in ours:

```diff
+ALTER TABLE ONLY public.organizations
+    ADD CONSTRAINT organizations_slug_key UNIQUE (slug);
+ALTER TABLE ONLY public.query_snippets
+    ADD CONSTRAINT query_snippets_trigger_key UNIQUE (trigger);
+ALTER TABLE ONLY public.users
+    ADD CONSTRAINT users_api_key_key UNIQUE (api_key);
```

I don't know if these are the result of missed database migrations when Redash was upgraded in the past. Nor do I know whether I need to address these differences going forward.
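For anyone wanting to reproduce this kind of comparison: the diffs above are the shape of output you get from diffing two schema-only `pg_dump` files. A minimal Python sketch using `difflib`, where the dump file names and the inline DDL fragments are stand-ins for the real dumps:

```python
import difflib

# Schema-only dumps would normally come from something like:
#   pg_dump --schema-only --no-owner existing_db > existing.sql
#   pg_dump --schema-only --no-owner fresh_db    > fresh.sql
# These inline fragments stand in for the contents of those files.
existing = """\
CREATE TABLE public.changes (
    object_id character varying(255) NOT NULL
);
"""
fresh = """\
CREATE TABLE public.changes (
    object_id integer NOT NULL
);
"""

# Produce a unified diff: lines only in the existing DB get a leading '-',
# lines only in the fresh install get a leading '+'.
diff = difflib.unified_diff(
    existing.splitlines(), fresh.splitlines(),
    fromfile="existing.sql", tofile="fresh.sql", lineterm="",
)
print("\n".join(diff))
```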
Hi.

I'm having problems getting the database migration to complete when I upgrade from the `10.0.0.b50363` Docker image to `25.8.0`.

The migrations start off okay like so:

... skipping a bunch of lines like the above and then getting to an exception

It looks like it is choking on trying to insert a null value for the `options` column in the `widgets` table for the record with `id=307`, i.e.:

```
DETAIL:  Failing row contains (307, 2019-03-07 03:08:05.000607+00, 2019-03-07 03:08:05.000607+00, 3056, null, 1, null, 4).
```

The `widgets` table, both before and after the migration has aborted, looks like so:

The record in question looks like so:

The value that is in the `options` column is valid JSON as far as I can tell, since `jq` can parse it.

I think, from the logs above, the migration that is failing is the one in `/app/migrations/versions/db0aca1ebd32_12_column_dashboard_layout.py` in the Docker image? The `upgrade()` function in that looks like so:

Is it because that first statement is referring to a `col` field in the `position` dictionary, and not finding one, perhaps?

I suspect I need to update the `options` field for this record and add a `col` value (`"col": 0` or something like that?) prior to trying to perform the migration?

I don't know the correct way to proceed from here and was hoping for some guidance.
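If it helps clarify what I mean, here is a small pre-flight check along the lines I'm imagining: scan `(id, options)` rows exported from `widgets` and flag anything that would trip the migration, assuming my reading of `upgrade()` is right. The function name and the row format are made up for illustration:

```python
import json


def find_bad_widgets(rows):
    """Given (id, options_text) pairs exported from the widgets table,
    return the ids that look problematic for the 12-column layout
    migration: NULL options, unparseable JSON, or a position dict
    with no 'col' key."""
    bad = []
    for widget_id, options_text in rows:
        if options_text is None:
            bad.append(widget_id)
            continue
        try:
            options = json.loads(options_text)
        except ValueError:
            bad.append(widget_id)
            continue
        if "col" not in options.get("position", {}):
            bad.append(widget_id)
    return bad


rows = [
    (307, '{"position": {"row": 4}}'),            # no 'col' -> flagged
    (308, '{"position": {"row": 0, "col": 2}}'),  # fine
    (309, None),                                  # NULL options -> flagged
]
print(find_bad_widgets(rows))  # prints [307, 309]
```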