Discussion:
creating tables in S3
Knapp, Michael
2017-05-15 20:20:55 UTC
Hi,

So I have a directory full of pipe-separated value files. I was hoping to convert these to Parquet using Drill’s CTAS command. I tried this:

create table s3.tmp.`my_table` (x, y) as SELECT COLUMNS[0] x, COLUMNS[1] y FROM s3.`path/to/my.tbl`

after a little time, I get this error:

org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: Schema [s3.tmp] is not valid with respect to either root schema or current default schema. Current default schema: No default schema selected

I am able to query data from that S3 file.

This is my S3 plugin configuration:

{
  "type": "file",
  "enabled": true,
  "connection": "s3a://my_bucket",
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "drill-tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },


I have created the directory “drill-tmp” in my bucket, and it is empty.

The file is pipe-separated values, so it does not have a schema. Does anybody know what I’m doing wrong or how to get this to work?

Michael Knapp
Charles Givre
2017-05-15 20:34:32 UTC
Hi Michael,
A few questions:
1. Does the original query work on its own?

SELECT COLUMNS[0] x, COLUMNS[1] y FROM s3.`path/to/my.tbl`

One thing that jumps out at me is that I think "columns" has to be lowercase; a lowercase version is sketched below.

2. Did you set up the .tbl extension to read pipe-separated files? That could also be causing problems if not; a sample format mapping is sketched below.
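
For reference, here are rough, untested sketches of what I mean. The lowercase version of your query would be:

SELECT columns[0] AS x, columns[1] AS y FROM s3.`path/to/my.tbl`

And for the .tbl extension, the "formats" section of the storage plugin config would need something along the lines of the default psv entry Drill ships with:

"formats": {
  "psv": {
    "type": "text",
    "extensions": ["tbl"],
    "delimiter": "|"
  }
}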

-- C


Padma Penumarthy
2017-05-15 20:45:30 UTC
I am wondering if the location in the plugin configuration should be

"location": "/drill-tmp" (instead of "location": "drill-tmp")


Thanks,
Padma