Gustavokathrinemaxine's Profile

95
Points

Questions
17

Answers
15

  • I would use not exists:

    select t.*
    from mytable t
    where not exists (
        select 1
        from mytable t1
        where t1.policynumber = t.policynumber
          and t1.reason <> 'Return'
    )

    This query would take advantage of an index on (policynumber, reason).
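    Since SQLite ships with Python, a quick way to sanity-check the pattern (table and column names are taken from the query above; the data and the index name ix_policy_reason are made up):

    ```python
    import sqlite3

    # Keep only policies where *every* row has reason = 'Return'.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE mytable (policynumber TEXT, reason TEXT)")
    conn.executemany(
        "INSERT INTO mytable VALUES (?, ?)",
        [("P1", "Return"), ("P1", "Return"),
         ("P2", "Return"), ("P2", "Damage"),
         ("P3", "Return")],
    )
    # The suggested supporting index on (policynumber, reason)
    conn.execute("CREATE INDEX ix_policy_reason ON mytable (policynumber, reason)")

    rows = conn.execute("""
        SELECT t.* FROM mytable t
        WHERE NOT EXISTS (
            SELECT 1 FROM mytable t1
            WHERE t1.policynumber = t.policynumber AND t1.reason <> 'Return'
        )
    """).fetchall()
    # P2 is excluded because it has a 'Damage' row
    ```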

    • 55 views
    • 2 answers
    • 0 votes
  • I tried a slight variation of this (listagg instead of string_agg) in Snowflake and it seemed to return the expected result:

    with cte (item_code, abc, id) as (
        select item_code, a, 'a' from table1
        union all
        select item_code, b, 'b' from table1
        union all
        select item_code, c, 'c' from table1
    )
    select item_code,
           max(case when id = 'a' then abc end) a,
           max(case when id = 'b' then abc end) b,
           max(case when id = 'c' then abc end) c,
           split_part(string_agg(abc::varchar, ',' order by abc desc), ',', 1) largest1,
           split_part(string_agg(abc::varchar, ',' order by abc desc), ',', 2) largest2,
           split_part(string_agg(id, ',' order by abc desc), ',', 1) largest1_col,
           split_part(string_agg(id, ',' order by abc desc), ',', 2) largest2_col
    from cte
    group by item_code;
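    The split_part/listagg calls are Snowflake-specific, but the core unpivot-then-pivot shape can be checked anywhere. A minimal sketch using Python's built-in SQLite (sample data invented), covering the UNION ALL stacking and the MAX(CASE ...) pivot:

    ```python
    import sqlite3

    # Stack columns a, b, c into rows, then pivot back with MAX(CASE ...).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE table1 (item_code TEXT, a INT, b INT, c INT)")
    conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)",
                     [("X", 10, 30, 20), ("Y", 5, 1, 9)])

    rows = conn.execute("""
        WITH cte(item_code, abc, id) AS (
            SELECT item_code, a, 'a' FROM table1
            UNION ALL SELECT item_code, b, 'b' FROM table1
            UNION ALL SELECT item_code, c, 'c' FROM table1
        )
        SELECT item_code,
               MAX(CASE WHEN id = 'a' THEN abc END) AS a,
               MAX(CASE WHEN id = 'b' THEN abc END) AS b,
               MAX(CASE WHEN id = 'c' THEN abc END) AS c,
               MAX(abc) AS largest1
        FROM cte
        GROUP BY item_code
        ORDER BY item_code
    """).fetchall()
    # largest1 is the per-item maximum across a, b, c
    ```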
    • 16 views
    • 2 answers
    • 0 votes
  • Asked on September 1, 2020 in Sql.

    I think this code could be useful too. It returns the order_id values for whichever number of transactions is set in the @bill_freq variable. I used made-up data.

    declare @bill_freq int = 2;

    ;with rn_cte(order_id, transactionnumber, fiscal_week_no, wk_rn) as (
        select *,
               row_number() over (partition by order_id
                                  order by order_id, fiscal_week_no) wk_rn
        from (values ('00001', 278100, 1),
                     ('00001', 278101, 2),
                     ('00002', 278102, 3),
                     ('00002', 278103, 4),
                     ('00003', 278104, 5),
                     ('00004', 278105, 7),
                     ('00004', 278106, 9)) v(order_id, transactionnumber, fiscal_week_no)
    )
    select order_id
    from rn_cte
    group by order_id
    having max(wk_rn) = @bill_freq;

    Results

    order_id
    00001
    00002
    00004
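    The same idea ports outside SQL Server. A sketch using Python's built-in SQLite (3.25+ for window functions), with the same made-up rows:

    ```python
    import sqlite3

    # Number the rows per order_id, then keep orders whose row count
    # (max row number) equals bill_freq.
    bill_freq = 2
    conn = sqlite3.connect(":memory:")
    rows = conn.execute("""
        WITH v(order_id, transactionnumber, fiscal_week_no) AS (
            VALUES ('00001', 278100, 1), ('00001', 278101, 2),
                   ('00002', 278102, 3), ('00002', 278103, 4),
                   ('00003', 278104, 5),
                   ('00004', 278105, 7), ('00004', 278106, 9)
        ),
        rn_cte AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY order_id
                                         ORDER BY fiscal_week_no) AS wk_rn
            FROM v
        )
        SELECT order_id FROM rn_cte
        GROUP BY order_id
        HAVING MAX(wk_rn) = ?
        ORDER BY order_id
    """, (bill_freq,)).fetchall()
    # '00003' has only one transaction, so it is filtered out
    ```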
    • 9 views
    • 2 answers
    • 0 votes
  • Asked on September 1, 2020 in Sql.

    This is tested in 19.0. I don’t have earlier versions to test on right now, but I think it will require at least 12.1.

    First, if you need the type to be an associative array (‘index by’), it needs to be in a package specification:

    create or replace package demo_pkg as
        type t_table is table of varchar2(15) index by pls_integer;
    end demo_pkg;

    Then SQL can see it:

    declare
        -- type t_table is table of varchar2(15) index by pls_integer;
        subtype t_table is demo_pkg.t_table;

        from_list t_table;
        to_list   t_table;

        procedure get_something
            ( p_in_list  in  t_table
            , p_out_list out t_table )
        is
        begin
            select dummy bulk collect into p_out_list
            from   dual
            where  dummy in (select * from table(p_in_list));
        end get_something;
    begin
        from_list(1) := 'X';
        from_list(2) := 'Y';
        from_list(3) := 'Z';

        get_something(from_list, to_list);
    end;

    From 18c you can populate the array declaratively using a qualified expression, e.g:

    from_list t_table := demo_pkg.t_table(1 => 'X', 2 => 'Y', 3 => 'Z'); 

    or

    get_something
        ( demo_pkg.t_table(1 => 'X', 2 => 'Y', 3 => 'Z')
        , to_list );

    Some of these restrictions are because associative arrays aren’t really a natural fit for SQL queries, and support for them took a while to be added. If you declare t_table as a regular nested table, it should work in an earlier version:

    create or replace package demo_pkg as
        type t_table is table of varchar2(15);
    end demo_pkg;

    or create it as a standalone SQL object:

    create or replace type t_table as table of varchar2(15); 

    This also makes a member of construction possible:

    declare
        from_list t_table := t_table('X','Y','Z');
        to_list   t_table;

        procedure get_something
            ( p_in_list  in  t_table
            , p_out_list out t_table )
        is
        begin
            select dummy bulk collect into p_out_list
            from   dual
            where  dummy member of p_in_list;
        end get_something;
    begin
        get_something(from_list, to_list);
    end;

    member of only works with "nested table" collections, not associative arrays or varrays. I can never really see the point of varrays, unless the size limit is so useful for your business logic that you can live with all the lost functionality.

    • 10 views
    • 2 answers
    • 0 votes
  • Asked on September 1, 2020 in Sql.

    Unless you create a derived table after aliasing the columns, you have to join on the non-friendly names. In your select you can alias the columns that are returned in the final result set:

    Example to alias

    SELECT
        "ColumnNm" as [friendly name 1],
        "ColumnUserFriendlyNm" as [friendly name 2]
    FROM table_1

    Example to alias and use a derived table

    SELECT * from (
        SELECT
            "key1" as [friendly name 1],
            "key2" as [friendly name 2]
        FROM table_2
    ) as table_2_aliasd

    https://www.sqlservertutorial.net/sql-server-basics/sql-server-alias/

    If you need the aliases to persist, consider turning the aliased query into a view:

    create view dbo.vTable1_2 as
    SELECT * from (
        SELECT
            "key1" as [friendly name 1],
            "key2" as [friendly name 2]
        FROM table_2
    ) as table_2_aliasd
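    To see the aliases actually persist, here is a quick check with Python's built-in SQLite (standard double-quoted identifiers instead of T-SQL brackets; table and column names follow the example above):

    ```python
    import sqlite3

    # The view exposes the friendly, aliased column names.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE table_2 (key1 TEXT, key2 TEXT)")
    conn.execute("INSERT INTO table_2 VALUES ('a', 'b')")
    conn.execute('''
        CREATE VIEW vTable1_2 AS
        SELECT key1 AS "friendly name 1",
               key2 AS "friendly name 2"
        FROM table_2
    ''')
    cur = conn.execute("SELECT * FROM vTable1_2")
    colnames = [d[0] for d in cur.description]
    # colnames now holds the friendly names, not key1/key2
    ```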
    • 7 views
    • 3 answers
    • 0 votes
  • Just to spell out something explicitly:

    The main thread is basically the UI thread.

    So saying that you cannot do networking operations in the main thread means you cannot do networking operations in the UI thread, which means you cannot do networking operations in a *runOnUiThread(new Runnable() { ... })* block inside some other thread, either.

    (I just had a long head-scratching moment trying to figure out why I was getting that error somewhere other than my main thread. This was why; this thread helped; and hopefully this comment will help someone else.)

    • 26 views
    • 30 answers
    • 0 votes
  • Asked on September 1, 2020 in Mysql.

    I am looking for some help with creating a function that generates an incremental number as a case id.

    Stop looking – it is MySQL 🙂

    So, if a person creates a case, they get an incremental case number

    That is:

    id int(11) unsigned not null auto_increment primary key 

    Don’t touch the primary key, and don’t let anyone else touch it.

    As for the scenario where the person creating a case gives it a custom case id as an external reference …

    This has nothing to do with the primary key. If needed, create columns userId and reference for ZD-00009 and the like – these columns should hold only a number (a foreign key) pointing to a table of users or a table of references, if the users or reference ids repeat.
    User case identifiers do not need consecutive numbers increasing by exactly 1; they must be unique and increasing, that is all. If that’s really not enough, add a timestamp to each case, but don’t touch the ids.
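    A minimal sketch of that separation in SQLite (the table and column names are illustrative): the auto-increment id stays internal, and the user-facing reference lives in its own unique column:

    ```python
    import sqlite3

    # Surrogate primary key the application never touches, plus a
    # separate column for the external case reference (e.g. 'ZD-00009').
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE cases (
            id INTEGER PRIMARY KEY AUTOINCREMENT,  -- internal, never exposed
            reference TEXT UNIQUE,                 -- external case id
            user_id INTEGER                        -- FK to a users table
        )
    """)
    conn.execute("INSERT INTO cases (reference, user_id) VALUES ('ZD-00009', 1)")
    conn.execute("INSERT INTO cases (reference, user_id) VALUES ('ZD-00010', 1)")
    rows = conn.execute("SELECT id, reference FROM cases ORDER BY id").fetchall()
    # ids increment on their own; references are free-form but unique
    ```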

    • 10 views
    • 1 answers
    • 0 votes
  • The issues you face are:

    1. You need to pass the MULTI_STATEMENTS flag to PyMySQL, and
    2. read_sql_query assumes that the first result set contains the data for the DataFrame, and that may not be true for an anonymous code block.

    You can create your own PyMySQL connection and retrieve the data like this:

    import pandas as pd
    import pymysql
    from pymysql.constants import CLIENT

    conn_info = {
        "host": "localhost",
        "port": 3307,
        "user": "root",
        "password": "toot",
        "database": "mydb",
        "client_flag": CLIENT.MULTI_STATEMENTS,
    }

    cnxn = pymysql.connect(**conn_info)
    crsr = cnxn.cursor()

    sql = """\
    CREATE TEMPORARY TABLE tmp (id int primary key, txt varchar(20))
        ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
    INSERT INTO tmp (id, txt) VALUES (1, 'foo'), (2, 'ΟΠΑ!');
    SELECT id, txt FROM tmp;
    """
    crsr.execute(sql)

    num_tries = 5
    result = None
    for i in range(num_tries):
        result = crsr.fetchall()
        if result:
            break
        crsr.nextset()

    if not result:
        print(f"(no result found after {num_tries} attempts)")
    else:
        df = pd.DataFrame(result, columns=[x[0] for x in crsr.description])
        print(df)
        """console output:
           id   txt
        0   1   foo
        1   2  ΟΠΑ!
        """

    (Edit) Additional notes:

    Note 1: As mentioned in another answer, you can use the connect_args argument to SQLAlchemy’s create_engine method to pass the MULTI_STATEMENTS flag. If you need a SQLAlchemy Engine object for other things (e.g., for to_sql) then that might be preferable to creating your own PyMySQL connection directly.

    Note 2: num_tries can be arbitrarily large; it is simply a way of avoiding an endless loop. If we need to skip the first n empty result sets then we need to call nextset that many times regardless, and once we’ve found the non-empty result set we break out of the loop.
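    The skip-empty-result-sets loop can be exercised without a database. FakeCursor below is a hypothetical stand-in, not part of PyMySQL: the empty result sets produced by the DDL/DML statements are skipped until the SELECT's rows appear.

    ```python
    # Stand-in for a cursor over multiple result sets (hypothetical).
    class FakeCursor:
        def __init__(self, result_sets):
            self._sets = result_sets
            self._i = 0

        def fetchall(self):
            return self._sets[self._i]

        def nextset(self):
            if self._i + 1 < len(self._sets):
                self._i += 1
                return True
            return None

    # Two empty sets (from CREATE and INSERT), then the SELECT's rows.
    crsr = FakeCursor([[], [], [(1, "foo"), (2, "ΟΠΑ!")]])

    num_tries = 5
    result = None
    for _ in range(num_tries):
        result = crsr.fetchall()
        if result:
            break
        crsr.nextset()
    # result now holds the first non-empty result set
    ```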

    • 12 views
    • 3 answers
    • 0 votes
  • Asked on September 1, 2020 in Mysql.

    In the compose file, under ports: you are creating a forwarding rule from localhost:3307 to container:3307.

    While you can choose any source port that is not in use on your host machine, at the other end you must hit the port on which the container is listening: in this case it would be 3306.

    Your docker-compose.yml file should look like this:

    version: "3.8"
    services:
      mysql:
        image: mysql:latest
        ports:
          - 3307:3306
        environment:
          MYSQL_ROOT_PASSWORD: SomeRootPassword1!
          MYSQL_USER: someuser
          MYSQL_PASSWORD: Password1!
          MYSQL_DATABASE: wedding

    …then you should be able to connect at localhost:3307

    • 14 views
    • 2 answers
    • 0 votes
  • Asked on September 1, 2020 in Mysql.

    I did a mini benchmark on this a while back. Conclusion: PDO and MySQLi are very similar, but the features in PDO are worth using.

    http://cznp.com/blog/2/apples-and-oranges-mysqli-and-pdo

    • 18 views
    • 2 answers
    • 0 votes